5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

So I've been testing. Thanks for pointing out the weird general/general1 thing. Had me pulling my hair out wondering why it wasn't using the model. Detection seems faster, but . . .

While testing/tuning I still get boxes drawn for suitcase, TV, and potted plant. It indicates that it's using the general1 model, but it's clearly still detecting things other than person and vehicle. Not sure what's happening.

Anyway. Thank you Mike for all the work you're putting into the custom models.
If you are only using custom models, you need to uncheck "Default object detection".
[Screenshot: Blue Iris AI settings with "Default object detection" unchecked]
 
Yeah. That's been unchecked. It doesn't matter; it's still running both the license plate and YOLO models alongside general1.
 
Yeah. That's how it is set up. I was running your custom model with Deepstack, and all was working fine before. It's curious. I appreciate you trying to troubleshoot, but don't feel any obligation to spend any time on this now. Things may well change when Blue Iris gets updated. Given the bug you found regarding general, it wouldn't be surprising if there were more oddities buried in the code that forced the yolo model.

I was wondering if it had something to do with the save AI analysis details option, but I tested with and without that option on old and new clips. Didn't matter. When using testing/tuning and the log window, the yolo model was always running. The general model didn't run until I fixed the naming to general1 and put it in the assets folder. No big deal. It's the curse of the bleeding edge.
 
Thanks Mike for posting the correct BI settings. Mine appears to be detecting just the custom models. I am trying combined on some cams and general1 on others, and both are giving me great times. Is there another place we can view the log that appears in the SenseAI scrolling window? Last night I was trying to figure out where it was looking, but it's tough to read as it's moving. Anyway, looking better all the time :)

[Screenshots: SenseAI detection times]

UPDATE: I just turned on sub streams on my AI cams and then unchecked "use main stream if available". This made for even better timings.
[Screenshot: detection timings with sub streams]

Yet another update: I couldn't wait for Ken to release the update, so I went ahead and have SenseAI 1.5 going now on my main BI. I am amazed at my times running the combined.pt model. This is a slower machine than my demo, so I was curious whether the speeds would hold up . . . and yes they are!

[Screenshot: combined.pt detection timings]
 
So what appears to be happening is that when running clips through the AI log window with "Analyze with AI" checked in Testing & Tuning, all custom models in the assets folder are run, regardless of whether they are enabled in your settings.

In normal usage, it appears to work as intended, running only the models you specify in settings. The DAT files created for actual alerts show just the model or models you have enabled.
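As a cross-check outside of BI, you can hit a single model directly over HTTP and see what it returns on its own. This is just a sketch assuming the Deepstack-compatible custom-model route and the default SenseAI port; adjust host/port and the image path to your install:

Code:
# post one test frame to the general1 model only
curl -s -X POST -F "image=@test-frame.jpg" http://localhost:5000/v1/vision/custom/general1

If the returned predictions only ever contain person/vehicle labels, the extra suitcase/TV/potted plant boxes are coming from BI's testing path running the other models, not from general1 itself.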
 
Where can I download or locate the general.pt model?
 
I am using the docker version (on Linux), which runs very nicely so far. Do the custom models work with the docker version too, and if yes, how?
Probably by putting the custom models somewhere in the right path and enabling them, but what is the right path, and how do you enable them?
 

This is how I did it to use the combined.pt model (which speeds up my detection by a factor of 3) with docker:

This is my docker-compose.yml for SenseAI:

Code:
version: "3.3"

services:
  deepstack:
    image: codeproject/ai-server:latest
    restart: unless-stopped
    container_name: senseai
    ports:
      - "80:5000"   # host port 80 -> SenseAI's internal port 5000
    environment:
      - VISION-SCENE=True        # enable the scene module
      - VISION-FACE=True         # enable the face module
      - VISION-DETECTION=True    # enable object detection (the part BI talks to)
      - CUDA_MODE=False          # CPU only, no GPU
      - MODE=Medium              # speed/accuracy tradeoff
      - PROFILE=desktop_cpu
      - Modules:TextSummary:Activate=False
      - MODELS_DIR=/usr/share/CodeProject/SenseAI/models   # where SenseAI looks for models inside the container
    volumes:
      - /opt/senseai/data:/usr/share/CodeProject/SenseAI:rw   # host data path -> container data path

As you can see, I've mapped the data path in my Linux VM to "/opt/senseai/data".

I've now created a folder "models" in /opt/senseai/data and put the "combined.pt" file there:

[Screenshot: contents of /opt/senseai/data/models]

Remark: "yolov5m.pt" is the default model; SenseAI will download it and place it in this folder at start as well!
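In shell terms, that host-side setup is just (assuming combined.pt is in your current directory):

Code:
# create the models folder on the host and drop the custom model in
mkdir -p /opt/senseai/data/models
cp combined.pt /opt/senseai/data/models/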

The "MODELS_DIR" environment variable points to the internally mapped path of SenseAI (in this case: /usr/share/CodeProject/SenseAI/models)

Then I fire up the container with "docker-compose up -d".
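To sanity-check that the container picked up the model, you can post a test image to it directly. A sketch, assuming the Deepstack-compatible custom-model route (the same one BI calls) and the port 80 mapping from the compose file; <docker-host> and snapshot.jpg are placeholders:

Code:
# ask the combined model (and only that model) to analyze one image
curl -s -X POST -F "image=@snapshot.jpg" http://<docker-host>/v1/vision/custom/combined

If it's working, you should get back JSON with a "predictions" array of labels, confidences, and bounding boxes.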

It took me a while to learn that BI needs the model file as well ... so I've uploaded "combined.pt" to "C:\BlueIris\Models" in my BI-VM too. Then:

[Screenshot: Blue Iris global AI settings]

Finally, you need to enter the model in each camera's settings:

[Screenshot: camera AI settings with the custom model entered]

That's it.
 
However, as I've seen, you pay for the speed with precision. Running both the default and the combined model, the combined one often misses persons and cars that are quite small. The default model takes much more time, but detects far better.

I think the golden solution would be the default model with just all the unnecessary crap like giraffes etc. removed.
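Until a trimmed-down model like that exists, one rough workaround is to keep the default model and filter its JSON output down to the labels you care about. A sketch, assuming the standard /v1/vision/detection route and jq installed; <docker-host> and snapshot.jpg are placeholders:

Code:
# run the default model, then keep only person/car/truck predictions
curl -s -X POST -F "image=@snapshot.jpg" http://<docker-host>/v1/vision/detection \
  | jq '[.predictions[] | select(.label == "person" or .label == "car" or .label == "truck")]'

Of course that doesn't save any inference time; it only cleans up the results.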

For now I need to stay with the default one.
 
I see that too; however, in this case the combined model is fine for me as well.
But I am experimenting with SenseAI anyway, and probably will be for some time yet, as features and bugfixes seem to come quite fast at the moment.
 
What path are you using to start SenseAI? Is there a walk-through tutorial on setting this up from scratch?
 
Hi, I know that if triggered, AI will start at either the trigger time or the pre-trigger time, via the check box. But my question is: once AI starts analyzing (e.g. 6 images every 500 ms = 3 seconds in total), does AI stop looking for things after the 3 seconds if the original trigger is still active? Thanks
 
I'm a bit confused now.
"use main stream if available"

What does this actually do? I had it checked, but in this post most are suggesting it should be unchecked?

Does this setting mean it uses the main stream, or does unchecking it mean it uses the main stream? (I have sub streams set up on all cameras)
 

When checked, it uses the camera's main stream. But since the AI resizes frames to 640px anyhow, feeding it the main stream just costs extra CPU for resizing before analysis; a 2560x1440 main-stream frame, for example, carries about 16x the pixels of a 640x360 sub-stream frame, which is already at the target size. So in most cases feeding the sub stream is better.
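If you want to put rough numbers on the resize cost yourself, a quick sketch (assuming ffmpeg is installed; mainstream_frame.jpg and substream_frame.jpg are hypothetical snapshots grabbed from each stream):

Code:
# time scaling a main-stream frame vs a sub-stream frame down to 640px wide
time ffmpeg -y -loglevel error -i mainstream_frame.jpg -vf scale=640:-1 out_main.jpg
time ffmpeg -y -loglevel error -i substream_frame.jpg -vf scale=640:-1 out_sub.jpg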