5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

To me, again just the opinion of a tired old skeptic, if it takes .json edits AND reg edits it really isn't ready for prime time yet. I have no interest in spending an afternoon fooling with files and the registry to hopefully get SenseAI working properly, no matter how fast it may be.
 
If you’ve got the green boxes in the SenseAI UI, then something else has to be missing. If you could post a screenshot of the AI tab in BI and one of your camera triggers, it may help.
 
Thank you for posting your settings sir.
It helped confirm most of what I had already.

I am still at a loss as to how to actually edit the JSON to turn off the various modules CPAI runs.

I used Notepad++ to edit it. Updated my post above with a screenshot. Basically, change true to false where it says Activate in the modulesettings file. To be honest, I have no idea if it speeds anything up; I didn't notice much of a difference in the times.
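For anyone unsure what that edit amounts to, here is a minimal sketch of flipping those flags programmatically rather than by hand in Notepad++. The file name and the "Modules"/"Activate" key layout are assumptions based on the description above; verify them against your own modulesettings file before running anything like this:

```python
import json
from pathlib import Path

def deactivate_modules(settings_path, modules_to_disable):
    """Set "Activate": false for the named modules in a SenseAI-style
    modulesettings JSON file. The key names used here are assumptions,
    not a documented schema - check your own file first."""
    path = Path(settings_path)
    settings = json.loads(path.read_text())
    for name, module in settings.get("Modules", {}).items():
        if name in modules_to_disable:
            module["Activate"] = False
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Same effect as the manual true-to-false edit, just less error-prone if you need to redo it after an update overwrites the file.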
 
I used Notepad++ to edit it. Updated my post above with a screenshot. Basically, change true to false where it says Activate in the modulesettings file. To be honest, I have no idea if it speeds anything up; I didn't notice much of a difference in the times.
It does not speed detection times up but it does save on PC memory and GPU memory usage.
 
To me, again just the opinion of a tired old skeptic, if it takes .json edits AND reg edits it really isn't ready for prime time yet. I have no interest in spending an afternoon fooling with files and the registry to hopefully get SenseAI working properly, no matter how fast it may be.

Exactly.

SenseAI at the moment sounds more like the earlier third-party implementations of DS, like AI Tools and some of the other tools that popped up before the BI "integration" of DeepStack. AI Tools and other similar setups worked well for folks who could run Docker and knew some code and that sort of thing.

The DS integration with BI was relatively simple: basically, download the DS .exe file, run it, check the DS box in BI, and you were off and running.

SenseAI sounds like it is nowhere near to that point at the moment.
 
Exactly.

SenseAI at the moment sounds more like the earlier third-party implementations of DS, like AI Tools and some of the other tools that popped up before the BI "integration" of DeepStack. AI Tools and other similar setups worked well for folks who could run Docker and knew some code and that sort of thing.

The DS integration with BI was relatively simple: basically, download the DS .exe file, run it, check the DS box in BI, and you were off and running.

SenseAI sounds like it is nowhere near to that point at the moment.

How soon you have forgotten the hundreds of pages we went through to get DeepStack going. Just getting custom models took a 38-page thread, and this one is only up to 19 pages and look how far it has come. This thread should be for people wanting to help get it working, not for people bashing it for being in beta. If it is too much for you, then don't update. I myself think it will put an end to DeepStack eventually, so you may as well get on board. It won't be long before BI has it down to a level where everyone can enjoy the new AI; it's almost there now. I'm just thankful that MikeLud1 has spent many hours contributing to this project. I have been running CodeProject for over 2 months now and have yet to have BI crash. And then throw in the fact that I have been running BI on W11 for a month without ANY issues, which is also too much for some to comprehend :)
 
While I agree that it may (and probably will) replace DS eventually, I'm just trying to point out the problems as I see them. IIRC I did the same thing with DS. After all, I'm a skeptic and a curmudgeon, but I appreciate what Mike has done in terms of testing and documenting SenseAI, let alone all the time and effort he's put into custom models. Additionally, I know that I never had to edit config files directly for DS, nor did I have to edit the registry. Worst case was an uninstall and a reinstall.
 
Neither version of AI within BI would be anywhere near where it is now without the contributions of @MikeLud1!!!!! We cannot thank him enough for his contributions.

I was one of the early testers of the BI integration, literally updating BI the moment it came out, before anyone else here, and reporting the success or failure. Even with all the pages it took to get DeepStack going, I never had to go in and edit reg files or make .json edits. After the initial download of DeepStack, EVERY user input was within a BI screen, not reg edits and .json edits. It was more a case of Ken figuring out the coding to make it work and then us figuring out whether there was a space in between car,person or not LOL.

All I am saying (along with @sebastiantombs and others) is that BI and SenseAI need to get to the point where DeepStack was when it was abandoned, or there will be tons of clusterf*cked systems from people screwing up their registry.
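As an aside on the "space in between car,person" question: a small sketch of what a "to confirm" list plausibly does with DeepStack-style detection results. This is a guess at the behaviour, not BI's actual code; the prediction shape ({"label": ..., "confidence": ...}) follows the DeepStack-compatible response format:

```python
def confirmed_objects(predictions, to_confirm, min_confidence=0.4):
    """Keep only predictions whose label is on a comma-separated
    "to confirm" list and whose confidence clears the threshold.
    Strips whitespace, so "car, person" and "car,person" behave
    the same. A rough mimic, not BI's real parsing logic."""
    wanted = {label.strip().lower()
              for label in to_confirm.split(",") if label.strip()}
    return [p for p in predictions
            if p["label"].lower() in wanted and p["confidence"] >= min_confidence]
```

Written tolerantly like this, the space-or-no-space question stops mattering, which is presumably what Ken ended up doing.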
 
To me, again just the opinion of a tired old skeptic, if it takes .json edits AND reg edits it really isn't ready for prime time yet. I have no interest in spending an afternoon fooling with files and the registry to hopefully get SenseAI working properly, no matter how fast it may be.
Again, people only do that because they want to tinker and optimise the heck out of it with custom models. I've got test systems on old hardware and they work fine using out-of-the-box settings. If you just install it, it just works.
As pointed out earlier, if you use Docker, a single command is all it takes to download and install SenseAI. Seriously, how much simpler could it be?

GPU Version
docker run -d -p 5000:5000 --name 'SenseAI-ServerGPU' --restart always --gpus all -e VISION-SCENE=false -e VISION-FACE=false -e VISION-GENERAL=true -e Mode=Medium codeproject/ai-server:gpu

CPU Version
docker run -d -p 5000:5000 --name 'SenseAI-ServerCPU' --restart always -e VISION-SCENE=false -e VISION-FACE=false -e VISION-GENERAL=true -e Mode=Medium codeproject/ai-server

BlueIris config is identical (BlueIris doesn't need to auto start/stop SenseAI; the above docker command ensures the container is always running, even after a restart).
[attached screenshot: BlueIris AI settings]
 
Again, people only do that because they want to tinker and optimise the heck out of it with custom models. I've got test systems on old hardware and they work fine using out-of-the-box settings. If you just install it, it just works.

As pointed out earlier, if you use Docker, a single command is all it takes to download and install SenseAI. Seriously, how much simpler could it be?

Running a Docker is what kept many people away from the 3rd party platforms like AI tools or OnGuard or Last Watch AI....

I have no desire to run Docker or anything associated with it. That was the beauty of the BI DeepStack integration's simplicity: it was intended for the masses who didn't want to deal with Docker and such. Sure, it wasn't as capable as something like AI Tools, but for the majority it was sufficient. After a download and run of the .exe file, it was check some boxes and type in the objects I wanted DS to confirm.
 
The CodeProject.AI team has been very open to suggestions and has been implementing them, unlike the DeepStack team, who would put out a release and then you would not hear from them for months. Below are some of the requests that I made.

Add custom models and include my models - Implemented
Add the ability to benchmark custom models - Implemented
Fix speed issue - Fixed
Add a way to disable modules from the dashboard - I am sure they will implement this in the upcoming week or so.

So if anyone has any suggestions, let me know and I will pass them along.
 
Okay, so just further to my issue posted earlier, where 1.5.6.2 does not work.

I've done a complete uninstall and reinstall of 1.5.6 and all of the AI stuff works immediately... I then reinstalled 1.5.6.2 again, and nothing!! "No predictions"
 
I have no desire to run Docker or anything associated with it. That was the beauty of the BI DeepStack integration's simplicity: it was intended for the masses who didn't want to deal with Docker and such. Sure, it wasn't as capable as something like AI Tools, but for the majority it was sufficient. After a download and run of the .exe file, it was check some boxes and type in the objects I wanted DS to confirm.
Again, if you don't want to use Docker, you just download and install. There is no need for registry edits or manual config-file edits if, like with DeepStack, you just want to use the default general profiles.
Looking ahead, I'd encourage everyone to consider Docker: no clean-ups, no worrying about dependencies. It's the future.
 
Running a Docker is what kept many people away from the 3rd party platforms like AI tools or OnGuard or Last Watch AI....

I have no desire to run Docker or anything associated with it. That was the beauty of the BI DeepStack integration's simplicity: it was intended for the masses who didn't want to deal with Docker and such. Sure, it wasn't as capable as something like AI Tools, but for the majority it was sufficient. After a download and run of the .exe file, it was check some boxes and type in the objects I wanted DS to confirm.

Can't speak for the others as I never tried them, but AI Tools didn't mandate that DS be run within a Docker container.

In fact, when I first tested DS I installed the Windows installer version but found that it was eating up all the CPU resources, so I moved DS over to a Docker container and found it to be far less resource-hungry.
 
For the life of me I simply can't get SenseAI with CUDA working. I've completely uninstalled everything related to Nvidia and SenseAI (with loads of reboots), then performed a clean install of the Nvidia drivers (with reboot), then CUDA (with reboot), then cuDNN and zlib using the script (confirmed all paths are correct), with more reboots. Then I ran the latest CodeProject.AI installation (more reboots).

All I get is what looks like a perfectly working install with CUDA on the web dashboard (green boxes with GPU (CUDA)), but it can't detect a single thing using any model. The counters go up when testing, the benchmark gets good scores, and I even see CUDA usage in Windows Task Manager while testing/benchmarking. As soon as I edit the JSON files to use the CPU, everything works fine. FYI, I had no issues with DeepStack GPU/CUDA. This is Windows 10 with a GTX 1650. Has anyone got this working with Win10 and a 1650?
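For anyone chasing a similar CUDA mystery, a rough first sanity check is confirming that the driver tooling and the CUDA/cuDNN directories are actually visible from the environment the server runs in. A minimal diagnostic sketch; the PATH substrings it looks for are assumptions and will vary by install:

```python
import os
import shutil

def cuda_environment_report():
    """Rough checklist: is nvidia-smi on PATH, and which PATH entries
    look like CUDA or cuDNN install directories? Purely diagnostic;
    it proves nothing about whether the AI server can actually use CUDA."""
    return {
        "nvidia-smi on PATH": shutil.which("nvidia-smi") is not None,
        "CUDA-ish PATH entries": [
            p for p in os.environ.get("PATH", "").split(os.pathsep)
            if "cuda" in p.lower() or "cudnn" in p.lower()
        ],
    }

report = cuda_environment_report()
for key, value in report.items():
    print(f"{key}: {value}")
```

If nvidia-smi is missing or no CUDA-looking PATH entries show up from the same shell/service context the server starts in, the server may be falling back to CPU silently even though the dashboard looks green.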
 
The CodeProject.AI team has been very open to suggestions and has been implementing them, unlike the DeepStack team, who would put out a release and then you would not hear from them for months. Below are some of the requests that I made.

I have noticed this too and this is why I've been using it now as my primary tool.

I just noticed that one of the issues I pointed out, with environment variables for disabling modules in Linux Docker containers, has been mentioned in the documentation as fixed in 1.5.7 (when it's released). It was great to see that.
 
Does anyone know if SenseAI will work on the Jetson Nano, please? I know it has the Docker install option, but it should take advantage of the CUDA stuff on the Nano. Thank you
Just tried the new SenseAI version on my Jetson Nano. Not working, and I guess it never will, as the Jetson Nano only supports CUDA 10.2.

Just curious if it's worth ditching the Nano, with its decently running DeepStack detection, to switch to SenseAI on an i7 desktop PC... I can't see any obvious improvements... I'm mostly interested in reducing false positives, which are frequent with DS and MikeLud1's custom model...
 