5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Am I correct in thinking that version 5.5.8 can’t auto-start DeepStack? For me, the auto-start option in the global AI config is greyed out, so I can’t change it from CodeProject SenseAI back to C:\DeepStack.
 
I think if you install CodeProject SenseAI, Blue Iris defaults to it. If you uninstall CodeProject SenseAI, it will go back to normal.
 
Yes, confirmed. Somewhat annoying that once installed, SenseAI was the only option here, and to unselect it, I had to uninstall the entire program.

Yes it’s very annoying, I would have preferred an option to easily swap between DS/SenseAI for testing purposes – At the moment (IMO) more work is needed in SenseAI around custom models and GPU support before it is ready for mainstream BI use.
 
I've had a good experience with it so far. Everything installed without any issues. I didn't have to change any settings, as I installed as a service and it used the same configuration as DeepStack. I didn't have to change any of my object classifications either.

I didn't have response time down to a science, but it seems to trigger just as fast as DeepStack. Running Optiplex 7060 with Intel i7-8700.
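For anyone who wants to put rough numbers on that comparison, here’s a stdlib-only Python sketch that times a single detection request. It assumes the DeepStack-style /v1/vision/detection endpoint on port 5000 (SenseAI v1 exposes the same API; adjust host/port for your install):

```python
import json
import time
import urllib.request
import uuid

def build_multipart(field, filename, payload):
    """Build a multipart/form-data body with a single file field,
    as the DeepStack-style /v1/vision/detection endpoint expects.
    Returns (body_bytes, content_type_header)."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

def time_detection(url, image_path):
    """POST one snapshot and return (elapsed_seconds, parsed JSON)."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("image", "snapshot.jpg", f.read())
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": ctype})
    t0 = time.perf_counter()
    with urllib.request.urlopen(req, timeout=30) as resp:
        result = json.load(resp)
    return time.perf_counter() - t0, result

# Example (adjust path/port to your setup):
# elapsed, result = time_detection(
#     "http://localhost:5000/v1/vision/detection", "snapshot.jpg")
# print(f"{elapsed*1000:.0f} ms, {len(result.get('predictions', []))} objects")
```

Run it a few dozen times against the same snapshot with each server and you get a reasonable apples-to-apples latency comparison.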
 
The more feedback we get the better we can make it. Having said that, GPU support will be here within the week, and Custom model soon after.

We're working with Ken @ Blue Iris to ensure our senseAI provides Blue Iris (and other integrations) all the info it needs to understand how the server is going, what its capabilities are, and accessible info on what's not working. It's a living, breathing, hectic project and we're working 24/7 to make it better.
 
I think the complaints, such as they are, aren't necessarily SenseAI problems but are more related to not-so-great integration/preparation with BI for users coming from DeepStack. Having both on my machine was a total disaster, and BI didn't handle the switchover very well. I ended up uninstalling SenseAI because it doesn't yet support a GPU or custom models, and when both were on the system the machine ran out of memory (i7-12700K with 32GB). I think you'll find that most BI and VMS users are interested in specific object types and fast detection times rather than scanning a large object base for laptops, giraffes, or elephants. I suspect that's also true of users who want AI for specific purposes, which makes custom models an important feature.

Incidentally, when SenseAI and DS co-existed on my machine, the number of python instances went from the normal 8-12 to over 52, which tied up far too much memory for one thing. I think BI needs to uninstall DS if a user installs SenseAI, or at least shut DS down in that event. Even reboots didn't fix the python instance problem.
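If anyone else wants to keep an eye on that, here’s a quick stdlib-only Python sketch (Windows, since that’s where BI runs) that tallies python processes and their memory from `tasklist` output. It assumes tasklist’s standard CSV columns (Image Name, PID, Session Name, Session#, Mem Usage):

```python
import csv
import io
import subprocess

def count_python(tasklist_csv):
    """Count python* processes and sum their memory (KB) from the
    output of `tasklist /fo csv`, where Mem Usage looks like "1,024 K"."""
    count, kb = 0, 0
    for row in csv.reader(io.StringIO(tasklist_csv)):
        if row and row[0].lower().startswith("python"):
            count += 1
            kb += int(row[4].split()[0].replace(",", ""))
    return count, kb

if __name__ == "__main__":
    out = subprocess.run(["tasklist", "/fo", "csv"],
                         capture_output=True, text=True).stdout
    n, kb = count_python(out)
    print(f"{n} python processes using ~{kb // 1024} MB")
```

Running it before and after installing a second AI server makes the runaway-instance problem easy to spot.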

We all appreciate the work you folks put into this kind of project, and my hat's off to both teams for producing something that actually works so well.
 

Really appreciate you coming on here and sharing with us. I'm 100% on board with using SenseAI and have completely uninstalled DeepStack. It worked fine for me, but I'd prefer to use and support the AI officially partnered with BI.

Thank you again, and I'm looking forward to seeing what's in store.
 
I thought CodeProject's installer worked very well (even on W11): it downloaded everything it needed on the fly, and I never once got a warning about it being a virus or threat. The only issue I saw was that it never configured the BI AI port. It also appears BI still reads the "Custom models:" field even though it is greyed out in the general/AI settings. For example, I had left "objects:0,combined" in that box and would not get any alerts from CodeProject AI. Once I removed the entry, it worked fine.
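For what it's worth, my reading of that box's syntax (community understanding, not official BI documentation) is that entries are comma-separated model names and "name:0" disables a model, so "objects:0,combined" means "skip the built-in objects model, use combined". A tiny Python sketch of that interpretation:

```python
def parse_custom_models(spec):
    """Split a Blue Iris 'Custom models:' string into (enabled, disabled)
    model-name lists. Assumes the community-understood syntax:
    comma-separated names, with 'name:0' meaning disabled."""
    enabled, disabled = [], []
    for entry in filter(None, (e.strip() for e in spec.split(","))):
        name, _, flag = entry.partition(":")
        (disabled if flag == "0" else enabled).append(name)
    return enabled, disabled

# parse_custom_models("objects:0,combined") -> (["combined"], ["objects"])
```

Under that reading, a leftover "combined" entry tells BI to use a custom model the new server doesn't have, which would explain the missing alerts.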

 
How much work would have to be done for the @MikeLud1 plate customization to work, or is it simply a plug and play switch?

And/or could CodeProject SenseAI make it even simpler: plug-and-play, with a checkbox for LPR?

And is my understanding correct that the @MikeLud1 custom models will work with this (or will soon)?
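If/when the plate model runs under a DeepStack-compatible server, custom models are normally exposed at /v1/vision/custom/&lt;model-name&gt;. Here's a hedged Python sketch of handling such a response; the model name "license-plate" in the comment is illustrative only, not an official name:

```python
# Hypothetical custom-model endpoint, DeepStack-style:
#   POST http://localhost:5000/v1/vision/custom/license-plate
# (multipart/form-data with an "image" field, as with /v1/vision/detection)

def plates_above(response, min_confidence=0.6):
    """Return labels from a DeepStack-style JSON response dict whose
    prediction confidence meets the threshold."""
    return [p["label"] for p in response.get("predictions", [])
            if p["confidence"] >= min_confidence]

# Example response shape (server returns JSON like this):
# {"success": true, "predictions": [{"label": "plate", "confidence": 0.91,
#                                    "x_min": 10, "y_min": 20,
#                                    "x_max": 110, "y_max": 60}]}
```

The confidence threshold is where you'd tune out the weak detections before handing anything to BI's alert logic.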
 
Does anyone know if SenseAI will work on the Jetson Nano? I know it has the Docker install option, but it should take advantage of the CUDA hardware on the Nano. Thank you.