5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Congratulations to the CodeProject.AI development team on version 1.6.
I was able to install the application, disable all modules except the Object Detection (YOLO), and change them all to CPU.
I then basically just changed the IP address and port for the AI in Blue Iris, and immediately began getting alerts.
I installed the 1.6.1-beta version yesterday and it began working without any further adjustment.
So version 1.6 has been running for more than 48 hours with detection times similar to the results I was seeing with my Jetson Nano DeepStack device.
I HAVE ACTUALLY SHUT DOWN THE JETSON NANO!!!
I don't think I need it as an emergency backup for my production Blue Iris machine.
I will give this a week or so, and then I will try pulling the CodeProject.AI Docker container on the Jetson Nano to see if it will work there with the on board NVIDIA GPU.
Again, many thanks to the CodeProject team on this latest accomplishment.

Steve

CodeProject.AI version 1.6.1-beta Windows Installation
HP Compaq 6200 Pro SFF PC
Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, 3300 Mhz, 2 Core(s), 2 Logical Processor(s)
Microsoft Windows 10 Pro
Version 21H2 Build 19044
NVIDIA GeForce GT 710 (Not used for CodeProject.AI at this time)
 
All, I have fixed it; the issue was under the Alerts tab. I had "all" zones selected instead of "any".

What do you all recommend for the AI camera settings for image processing time and real-time images?

I'm currently at 250 ms with 3 images; setup is an 11th-gen i5 with 32 GB of RAM and a 4 GB GTX 1650.
 
I tried, I just get this:
8:29:32 PM: Object Detection (YOLO): Object Detection (YOLO) started.
8:29:32 PM: CodeProject.BackendProcessRunner: C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\cuda\__init__.py:120: UserWarning:
8:29:32 PM: CodeProject.BackendProcessRunner: Found GPU%d %s which is of cuda capability %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: PyTorch no longer supports this GPU because it is too old.
8:29:32 PM: CodeProject.BackendProcessRunner: The minimum cuda capability supported by this library is %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch / 10, min_arch % 10))
8:29:32 PM: CodeProject.BackendProcessRunner: detect_adapter.py: APPDIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo

Going back to CPU
 
I tried, I just get this:
8:29:32 PM: Object Detection (YOLO): Object Detection (YOLO) started.
8:29:32 PM: CodeProject.BackendProcessRunner: C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\cuda\__init__.py:120: UserWarning:
8:29:32 PM: CodeProject.BackendProcessRunner: Found GPU%d %s which is of cuda capability %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: PyTorch no longer supports this GPU because it is too old.
8:29:32 PM: CodeProject.BackendProcessRunner: The minimum cuda capability supported by this library is %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch / 10, min_arch % 10))
8:29:32 PM: CodeProject.BackendProcessRunner: detect_adapter.py: APPDIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo

Going back to CPU


I had a GT 730 which gave me the same issue. Even though the site says it's supported, it's not. I recommend a GT 1030 or better.
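The PyTorch warning in the log above is about CUDA compute capability: prebuilt PyTorch wheels drop support for older GPU architectures. As a rough sketch of why the GT 710/730 fails while a GT 1030 works, here is a small lookup using capability values from NVIDIA's published specs. The 3.7 minimum is just an example for PyTorch 1.x-era wheels; the actual minimum varies by PyTorch build, and `is_supported` is a hypothetical helper, not part of any library:

```python
# Compute capability per card, from NVIDIA's spec tables.
COMPUTE_CAPABILITY = {
    "GT 710": (3, 5),
    "GT 730": (3, 5),   # GK208 variant; older GT 730 revisions can be even lower
    "GT 1030": (6, 1),
    "GTX 1650": (7, 5),
}

# Example minimum for prebuilt PyTorch 1.x wheels; check your build's docs.
MIN_CAPABILITY = (3, 7)

def is_supported(card: str) -> bool:
    """Return True if the card's compute capability meets the minimum."""
    return COMPUTE_CAPABILITY[card] >= MIN_CAPABILITY

for card in COMPUTE_CAPABILITY:
    status = "OK" if is_supported(card) else "too old for this PyTorch build"
    print(f"{card}: {status}")
```

On a machine with a working CUDA setup, you can query the installed card directly with `torch.cuda.get_device_capability(0)` instead of a lookup table.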
 
I tried, I just get this:
8:29:32 PM: Object Detection (YOLO): Object Detection (YOLO) started.
8:29:32 PM: CodeProject.BackendProcessRunner: C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\cuda\__init__.py:120: UserWarning:
8:29:32 PM: CodeProject.BackendProcessRunner: Found GPU%d %s which is of cuda capability %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: PyTorch no longer supports this GPU because it is too old.
8:29:32 PM: CodeProject.BackendProcessRunner: The minimum cuda capability supported by this library is %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch / 10, min_arch % 10))
8:29:32 PM: CodeProject.BackendProcessRunner: detect_adapter.py: APPDIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo

Going back to CPU
This is what I got when I tried to use DeepStack on this machine with the GT 710. I actually tried going back to earlier versions of DeepStack and CUDA, and got really mixed up.
When I tried the CPAI Explorer window, I got either no results or a "Couldn't start module" error.
I'm back to CPU; it should be working. I was getting 150-400 ms before.

Damn. Now I'm getting "Nothing Found" mostly. I'm good at breaking things. Some cars, but it's night.

Tomorrow is another day.
 
All, I have fixed it; the issue was under the Alerts tab. I had "all" zones selected instead of "any".

What do you all recommend for the AI camera settings for image processing time and real-time images?

I'm currently at 250 ms with 3 images; setup is an 11th-gen i5 with 32 GB of RAM and a 4 GB GTX 1650.

Sweet, glad to see you got it working! I was drafting a reply about what you meant by "Blue Iris is not responding", including checking the logs for AI activity, checking the timeline, alerts, etc.

For image processing time I have the following set. I think there are too many variables for a one-size-fits-all approach. Like everything else in life, "it depends".

Make sure you have something like banana or giraffe in the "To Cancel" box, or AI will stop processing images as soon as it finds something in your "To Confirm" box, pretty much making whatever setting you have for real-time images moot if, for instance, you have a parked car in view. From playing around with the settings in a very limited capacity: when I set it to 999, some alerts could take 999 × 250 ms ≈ 4 minutes to fire. It seems like some took even longer than that, but I don't remember for sure.
Bottom line: you have to figure out how many images is "enough" and how many is "too much", based on how long you think is long enough for an object in the "To Confirm" box to be found versus how much delay in the alert is acceptable while looking for those objects.

1664243217792.png

I've found that since adding giraffe to "To Cancel" and setting real-time images to 20, I no longer get as many cancelled alerts due to car headlights at night. Is it perfect? Nope... Will it ever be? Probably not. Do I fully understand the best settings and everything that happens under the hood? Absolutely not. As with everything else... YMMV.

If someone has better settings to try, I would definitely be game to try them, but I'm just sharing what little I know in case it helps anyone, with the disclaimer that I don't know much about AI :lol: .
 
I had a GT 730 which gave me the same issue. Even though the site says it's supported, it's not. I recommend a GT 1030 or better.
Yes, at the beginning I thought all I needed was an NVIDIA card, so I bought this one; it was available and budget-friendly.

Lately, I've been referencing this page:


It shows which cards are supported by various versions of CUDA.

The NVIDIA page is missing some cards, including mine.

I find a lot of GT 1050s, but not many GTX 1050s.

Anyway, the app is working acceptably in CPU mode for me now.
 
I tried, I just get this:
8:29:32 PM: Object Detection (YOLO): Object Detection (YOLO) started.
8:29:32 PM: CodeProject.BackendProcessRunner: C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\cuda\__init__.py:120: UserWarning:
8:29:32 PM: CodeProject.BackendProcessRunner: Found GPU%d %s which is of cuda capability %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: PyTorch no longer supports this GPU because it is too old.
8:29:32 PM: CodeProject.BackendProcessRunner: The minimum cuda capability supported by this library is %d.%d.
8:29:32 PM: CodeProject.BackendProcessRunner: warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch / 10, min_arch % 10))
8:29:32 PM: CodeProject.BackendProcessRunner: detect_adapter.py: APPDIR: C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo

Going back to CPU
Thanks for trying; win some, lose some.
 
Can someone tell me briefly what the advantages of SenseAI over DeepStack are, other than DeepStack having stopped development and no longer being updated? I have 6 cameras, an NVIDIA GT 1030 2 GB, and GPU DeepStack with MikeLud1's custom General DeepStack model, and I get great accuracy and times around 50-120 ms. For now I don't see a reason to move from DeepStack... correct me if I'm wrong?
 
Is there any way to improve recognition of humans? I am running the basic setup and am surprised to find the intruder captures did not reach at least 80% (my configured confirmation threshold).
 

Attachments

  • Person 56 percent.jpg (180 KB)
  • Person 74 percent.jpg (176.1 KB)
Is there any way to improve recognition of humans? I am running the basic setup and am surprised to find the intruder captures did not reach at least 80% (my configured confirmation threshold).
To improve accuracy you can make the change below to modulesettings.json. Also, you should uncheck "Use main stream if available"; having it checked does not improve accuracy, it only slows down detection.

1664284282825.png
1664284177073.png
1664284444233.png
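For reference, the screenshots above show an edit to modulesettings.json. As a rough sketch, the relevant section looks something like the fragment below; the exact key names, nesting, and accepted values vary between CodeProject.AI versions, so verify against your installed copy before editing (the `MODEL_SIZE` key and `"Large"` value here are assumptions based on this discussion, not a confirmed schema):

```json
{
  "Modules": {
    "ObjectDetectionYolo": {
      "EnvironmentVariables": {
        "MODEL_SIZE": "Large"
      }
    }
  }
}
```

Larger model sizes generally trade detection speed for accuracy, so expect longer detection times after this change, especially on CPU.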
 
I'm trying to get version 1.6.1 working in Docker, but I don't understand how to do it. I can only enable Object Detection (YOLO) and not Object Detection (.NET). When I do enable it, I get some error messages; see the picture below:
CP Error.png

What do I need to do to get 1.6.1 working in Docker in combination with Blue Iris?
 
I'm trying to get version 1.6.1 working in Docker, but I don't understand how to do it. I can only enable Object Detection (YOLO) and not Object Detection (.NET). When I do enable it, I get some error messages; see the picture below:
View attachment 141055

What do I need to do to get 1.6.1 working in Docker in combination with Blue Iris?
I tried an earlier version of the app in Docker, and found that because I was using docker with WSL, it was insisting on a Linux driver.
You could try unchecking Use WSL on the docker options page if you haven't already.
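For anyone starting from scratch, pulling and running the server in Docker is typically something like the following. This is a sketch: `codeproject/ai-server` is the published image name, but the port the 1.6.x builds listen on may be 5000 or 32168 depending on version, so check the release notes for your version before mapping it:

```shell
# Pull the CodeProject.AI server image and run it detached,
# mapping the API port (adjust the port to match your version).
docker pull codeproject/ai-server
docker run -d --name codeproject-ai -p 5000:5000 codeproject/ai-server
```

Then point Blue Iris at the Docker host's IP address and that port in the AI settings, the same as for a native install.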
 
I'm trying to get version 1.6.1 working in Docker, but I don't understand how to do it. I can only enable Object Detection (YOLO) and not Object Detection (.NET). When I do enable it, I get some error messages; see the picture below:
View attachment 141055

What do I need to do to get 1.6.1 working in Docker in combination with Blue Iris?
You cannot have both enabled; only one Object Detection module can be enabled at a time, either YOLO or .NET.
 
I tried an earlier version of the app in Docker, and found that because I was using docker with WSL, it was insisting on a Linux driver.
You could try unchecking Use WSL on the docker options page if you haven't already.

I use it on an Unraid system, so there is no WSL option as far as I know.

You cannot have both enabled; only one Object Detection module can be enabled at a time, either YOLO or .NET.

I know. But even if I disable YOLO, I still cannot enable .NET. Not sure what I need to do here.
 
How do you know whether you are running in High or Medium?
Is the grayed-out box in Settings > AI accurate? Mine says High; I am running .NET.

Also,
I was under the impression that .NET is better for people than YOLO. Is that correct, or should I try both and test for my situation?