5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

CodeProject AI is working in Blue Iris. Wonderful, and thanks again for all your work, Mike.
I changed the port back to 5000 from 32168.
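If you want to sanity-check the port outside of BI, a minimal test like this should come back with predictions (a sketch, assuming the DeepStack-compatible /v1/vision/detection endpoint and any sample image saved as test.jpg):

# Minimal check that the AI server answers on the port BI points at.
# /v1/vision/detection is the DeepStack-compatible route; swap in 32168
# if you kept the newer default port. test.jpg is any sample frame.
import requests

with open("test.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:5000/v1/vision/detection",
        files={"image": f},
    )

print(response.json())  # expect "success": true and a list of predictions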

The Dashboard shows I'm on GPU (CUDA) for Portrait Filter, Object Detection (YOLO), Scene Classification and Face Processing, whilst Background Remover is nominally on CPU, and Object Detection (.NET) is greyed out and on CPU.

It's early days and I'm sure I've a lot of camera configurations to do when I have the time.

Edit: after logging off, I noticed in my Alert clips there's now a person icon showing along the top, presumably indicating the AI has confirmed the alert.
 
Hello all,

I can't get triggers to work. I've used Mike's Python script update, I have made sure detection works in the web app, and I have test push notifications working. I am using a GTX 1650 with an i5-11600K.
Under Logs I see "deleted: nothing to do".
 

"Delete: nothing to do..." Is not related to AI. This is related to your clips and archiving setting and the regular housekeeping that Blue Iris does to keep your drives within the limits you have set. If there is nothing to delete yet then this is a normal message.

"AI: Alert cancelled [nothing found] xx ms" is related to SenseAI (or Deepstack).

Does your global AI tab look like this?

[Screenshot: Blue Iris global AI settings tab]

I would disable or stop all the modules and whatnot that you are not using; see below. I'm no expert, but I figure why tie up resources if you aren't using a specific feature yet. Get the basics working first.
[Screenshot: CodeProject.AI dashboard with unused modules stopped]
 
I'm disappointed; I'm getting no better results with the new software, 1.6 Beta. The BI status log still shows lots of green ticks but "Nothing found". I'm on CUDA 11.7.1 + cuDNN v8.5.0, and I've just updated the driver to 517.40 (dated 2022-09-20) for my Quadro T600, an NVIDIA GPU in the RTX/Quadro series. Apart from still having the registry edit for the custom models and previous re-installs of CodeProject AI (the DeepStack uninstall was a while back), I think I'm mostly using standard installs. BI is on 5.6.1.3.

I get scene detection using test images in the browser, as has been the case for some time, but anything else returns "No predictions returned". There are no squares drawn around any identified objects, which is what I did have with DeepStack.
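For what it's worth, this is roughly how I'd test it outside the browser and draw the squares myself (a sketch, assuming the DeepStack-compatible /v1/vision/detection endpoint, Pillow installed, and a sample image saved as test.jpg):

# Send an image straight to the server and draw the returned boxes, taking
# BI out of the equation to confirm whether the detector finds anything.
import requests
from PIL import Image, ImageDraw

path = "test.jpg"
with open(path, "rb") as f:
    result = requests.post(
        "http://127.0.0.1:5000/v1/vision/detection",
        files={"image": f},
    ).json()

img = Image.open(path)
draw = ImageDraw.Draw(img)
for p in result.get("predictions", []):
    # DeepStack-style predictions carry x_min/y_min/x_max/y_max pixel coords
    draw.rectangle([p["x_min"], p["y_min"], p["x_max"], p["y_max"]],
                   outline="red", width=3)
    draw.text((p["x_min"], max(0, p["y_min"] - 12)),
              f'{p["label"]} {p["confidence"]:.2f}', fill="red")
img.save("test_boxes.jpg")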

I've other jobs to attend to today, but now and again I'll revisit this thread to pick up more ideas and things to try. I remain mystified why this works for some and not others; I'm not the only one who hasn't got it working.
Edit: typo corrected
Each to their own, but when I run benchmarks, the performance gains with custom models are, in my eyes, not worth it. I just run it out of the box with zero custom config and no reg hack, and it just works.
Currently I'm using GPU on one rig and CPU on another; given the CPU version is so efficient, I think I'll drop the GPU.
 
spammenotinoz, you quoted an earlier post of mine. CodeProject AI is now working with BI after using Mike's latest process.py file and changing back to port 5000, so here that's AI Server on 127.0.0.1 and port 5000 in the BI Global Settings AI tab.
I can well imagine there may be performance differences between using CPU or GPU, as there were with DeepStack.
 
Sorry about that... :) Yes, definitely PERFORMANCE improvements, but not efficiency. My GPU instance is Docker Desktop on Windows, so it's a single command to "deploy" once, and it self-updates so I always have the latest. No file editing, no need to customise; it just works.
 
I am curious to know... how many motion triggers per minute are you processing according to the BI AI tab? My system is averaging 60-80 per minute, and I have found that during the busier hours my i5-10400 starts to get too busy for my liking, compared to my GTX 1050, which never breaks a sweat.
 
That's why one case doesn't fit all. I run an i7-10700 and average about 12 hits per minute across 10 cameras (obviously peaks are much higher). I record constantly, and task BI's object detection with avoiding a lot of false triggers. I also don't run my overview cams through AI; I never saw the point in doubling up.
In the past I did try to stress-test both DeepStack CPU and CodeProject.AI CPU, which required running multiple instances (all on different ports), as neither DeepStack nor CodeProject.AI would max out resources.
These days when I benchmark different models, I use the Benchmark feature within CodeProject.AI.

Now, if you have a GTX 1050, I would also be using the GPU, as I can't think of a more efficient GPU for this task, especially if you undervolt and underclock it.
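When the built-in Benchmark isn't handy, a crude timing loop works too (a sketch, assuming the DeepStack-compatible endpoint on port 5000 and a sample frame saved as test.jpg):

# Fire N detection requests and report the average round-trip time. Crude,
# single-threaded, and includes HTTP overhead, so treat the numbers as
# relative (CPU vs GPU, model vs model) rather than absolute.
import time
import requests

N = 50
URL = "http://127.0.0.1:5000/v1/vision/detection"
image_bytes = open("test.jpg", "rb").read()

start = time.perf_counter()
for _ in range(N):
    requests.post(URL, files={"image": image_bytes})
elapsed = time.perf_counter() - start
print(f"{N} requests in {elapsed:.1f}s, avg {1000 * elapsed / N:.0f} ms each")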
 
Can you try replacing the process.py file with the attached one, in the folder shown in the screenshot below? This might fix the 1660 issue.
[Screenshot: path to the ObjectDetectionYolo folder]
Hi Mike,
I have been hitting my head trying to get my GT 710 to work with the latest versions of CUDA.
I am currently running CodeProject.AI version 1.6.1, so I thought, what the heck, it's worth a try. I created a folder named backup in the C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo folder and moved the original process.py file there. I copied your process.py file to the ObjectDetectionYolo folder, restarted the service, and immediately broke the CPAI server. I was able to put the original process.py file back, but I also had to remove the backup folder I had created before the server would begin functioning again.
I've read in some places that the issue is with CUDA 11 using a version of PyTorch that no longer supports GPUs that old, and that you have to recompile PyTorch to get it to work.
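One quick way to confirm that theory is to ask the bundled PyTorch directly. A sketch, assuming you run it with the same Python environment the module uses (the GT 710 reports compute capability 3.5, which recent prebuilt PyTorch wheels no longer target):

# Ask PyTorch whether it can see and use the GPU at all.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))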
Anyway, the CPU version of the ObjectDetectionYOLO module is working fine now since upgrading to version 1.6.
Thanks for your work on these issues and the custom models; I've been using them for a while now.
Steve
CodeProject.AI version 1.6.1-beta Windows Installation
HP Compaq 6200 Pro SFF PC
Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz, 3300 MHz, 2 Core(s), 2 Logical Processor(s)
Microsoft Windows 10 Pro
Version 21H2 Build 19044
NVIDIA GeForce GT 710 (Not used for CodeProject.AI at this time)
 
You're exactly right, one case doesn't fit all. In my case I rely on having AI offloaded to the GPU, as my CPU is an older i5-6600 which can quickly become bogged down if I run AI on the CPU; it is also processing BI and the many other tasks I have it running (it's a multifunctional system). Everything is dependent on the CPU, whereas only the AI is dependent on the GPU. By running the AI on the GPU I don't need to be concerned about using really sensitive triggers and hammering it with loads of requests, because even if I overload the GPU there is no risk of maxing out the CPU (even though I don't; even my GTX 970 munches through the AI requests without breaking a sweat). I also find the GPU processing time much quicker on my system, but that might be down to an ageing CPU.
 
My beta testing rig is an i5-4500 @ 3.30GHz, also with a GTX 1050.

Even with AI and all decoding offloaded to the GPU, the CPU still seems to struggle: averaging 50%+, sometimes spiking to 100% and not coming down for long periods... if ever. Unless it is just the new version of BI causing it; I am running 5.5.9.6 on my production rig.
 
"Delete: nothing to do..." Is not related to AI. This is related to your clips and archiving setting and the regular housekeeping that Blue Iris does to keep your drives within the limits you have set. If there is nothing to delete yet then this is a normal message.

"AI: Alert cancelled [nothing found] xx ms" is related to SenseAI (or Deepstack).

Does your global AI tab look like this?

View attachment 140943

I would disable or stop all the modules or what not that you are not using. See below. I'm no expert, but I figure why tie up the resources if you aren't using the specific feature yet. Get the basics working first.
View attachment 140942




Hi, I have changed my CodeProject SenseAI settings to match yours, and my global AI settings match, but still no luck. I can see from the counter next to it that it's being used, but BI is not responding. Could someone look over my settings with me in a screen share? I'm willing to pay you for your time.
 
What GPU do you have?