CodeProject.AI Version 2.0

Thanks for the video, very cool. I didn't know I could set up that many trip wires.

I used 2 trip zones on each camera, so I was able to set up 2 separate sets of wires on each; this gets around the limit on the maximum number of lines you're allowed to draw.

The limit might not be an issue. However, if you have more than one distinct area to cover on a camera, then multiple zones might be the way to go. E.g. on my rear yard camera I need to cover the yard, but also a bay top which is potentially accessible from a nearby roof. So I have one zone drawn across the yard, and another drawn across the bay top to catch anyone getting onto the bay and attempting to reach the house from there. I also have wires drawn in Zone 2 to protect against someone climbing the downpipes.

Drawing the best patterns takes practice, but eventually you figure it out.

I have mine triggering reliably on both cameras, and ALL alerts then go through AI for verification before triggering recording and alert messages. (I have continuous + alerts set on recording, so I still have the continuous recording of the 2nd stream to fall back on should an alert be missed, although with AI confidence at 50%, given the small number of objects the cameras are looking for, I find reliability is 100%.)
 
I don't think any learning is happening with CPAI. The models you are using are already trained and "learned". You will have to train your own models to use with CPAI for each camera's FOV if you really want to get false alerts down to a bare minimum and get accuracy up as high as possible.
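For anyone heading down the train-your-own-models path: CPAI's custom object models are YOLOv5-based (the same family as the YOLOv5 .NET module discussed later in this thread), and a quick local sanity check of a custom-trained weights file can look roughly like the sketch below. The file names are illustrative placeholders, not real CPAI assets.

```python
import torch

# Hedged sketch: load a YOLOv5 weights file trained on frames from one camera's
# FOV and run it against a still frame. "my_camera.pt" and "snapshot.jpg" are
# hypothetical paths for illustration only.
model = torch.hub.load("ultralytics/yolov5", "custom", path="my_camera.pt")
results = model("snapshot.jpg")   # accepts a path, URL, PIL image, or ndarray
results.print()                   # prints detected classes, confidences, boxes
```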

Honestly, I've given up on CPAI for a while. I've found Dahua IVS coupled with human detection works for my needs. With IVS I never miss a human or car coming into my driveway, like I did with CPAI.
I'm with you on that. I'm slowly going over to in-camera AI. But these older cameras are still in good shape. I have the old ones working like the old DeepStack, so far. On the old ones I'm using the IPCams-general model with 2.0.8 CP.AI. Now, when I get an alert, it's most likely a person or vehicle.
 
I have been away for a few months. I am ready to upgrade from DeepStack to CodeProject.AI. Is there a list of instructions I can follow to upgrade? Thanks!
 
That is correct

Maybe there should be a button that lets you click on a clip and then add its trigger images to CPAI to aid learning. This would make it easy to add low-confidence but correct images to the database. The only issue I see is with images containing licence plates, especially if the addition were to a central AI database for all users rather than just local. Then again, a central database might need vetting anyway to prevent spurious or inaccurate submissions.
 
There should be info in this thread. There's also a video on YouTube, by "The HookUp" I believe.
 
Quick question regarding Coral. I have 2.1.4 running on a Raspberry Pi, and it's working well. But now none of Mike's custom/tuned models are available, only the "ObjectDetectionTFLite" one, which works fine for "person" but hasn't caught my dogs or vehicles as well as Mike's custom ones did.

@MikeLud1 Is that the motivation for your Orange Pi testing? Or am I misunderstanding and I can still use the ipcam-combined, etc. models? Thank you!

Hello everyone,
Unfortunately, it seems that the x86 Docker build with Coral is not working yet, so I will probably set everything up on a Raspberry Pi.
Still, I noticed the post above while browsing the thread. Was that just a mistake, or are there really no custom models for the Raspberry Pi?
Sorry for my poor English.

Greetings from Germany
Jan
 
The last few days all of my alerts have started being cancelled as "occupied". When I review the AI analysis, the moving vehicle is identified in the first couple of sample images with a confidence greater than my setting; then, after the vehicle leaves the frame, the sample images start showing the gold clock/occupied icon in the analysis window. I was still running CP.AI 2.0.8, so I upgraded to 2.1.9 since BI was already on the latest release. I tried unchecking "detect static objects"; on those alerts, nothing is detected after the vehicle leaves the frame, but rather than confirming the vehicle identified in the early samples, the alert just results in "nothing found". I tried unchecking "use main stream" and the problem has persisted. Any ideas?
 
I think there is an issue with BI; on any version past 5.7.7.5 I saw AI issues like yours. I have not emailed Ken about it yet. Try downgrading to version 5.7.7.5.

 
That does seem to be it. I didn't have the option for 5.7.7.5, though, as I was on 5.7.7.7, so I dropped back to 5.7.6.8 and all is working again.
 
Hi,

I am having to manually install the ALPR module because the server GUI does not work, whether on Windows directly, in Docker on Windows, or in Docker on Linux. Something is wrong: the installer does not complete the process and download the models unless the install is done manually, which I have done successfully on Windows. However, I want to run CPAI in Docker on Linux, and I have managed to execute ./../../setup.sh manually from inside the ALPR directory in the Linux Docker container, but I am getting a urllib3 error during the install.

In the log, ALPR starts up and immediately shuts down. I think it may not be finishing the install properly.

Install Log:
```
Setting up CodeProject.AI Development Environment
======================================================================
CodeProject.AI Installer

======================================================================
Checking GPU support
CUDA Present...No
Allowing GPU Support: Yes
Allowing CUDA Support: Yes
General CodeProject.AI setup
Creating Directories...Done
Installing module ALPR
Python 3.8 is already installed
Virtual Environment already present
Checking for Python 3.8...Found Python 3.8.16. present
Checking for CUDA...Not found
Ensuring PIP is installed...Done
Updating PIP...Done
Installing setuptools...Done
Choosing packages from requirements.linux.txt
Installing Packages into Virtual Environment...ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
botocore 1.29.135 requires urllib3<1.27,>=1.25.4, but you have urllib3 2.0.3 which is incompatible.
google-auth 2.18.0 requires urllib3<2.0, but you have urllib3 2.0.3 which is incompatible.
Success
Checking for CUDA...Not found
Ensuring PIP is installed...Done
Updating PIP...Done
Installing setuptools...Done
Choosing packages from requirements.txt
Installing Packages into Virtual Environment...ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
botocore 1.29.135 requires urllib3<1.27,>=1.25.4, but you have urllib3 2.0.3 which is incompatible.
google-auth 2.18.0 requires urllib3<2.0, but you have urllib3 2.0.3 which is incompatible.
Success
Downloading OCR models... already exists...Expanding...Done.
Applying PaddleOCR patch
Module setup complete
```


Log: ALPR starts up and immediately shuts down.

23:22:44:Command: /app/modules/ALPR/bin/linux/python38/venv/bin/python3
23:22:44:Starting /app...es/ALPR/bin/linux/python38/venv/bin/python3 "/app/modules/ALPR/ALPR_adapter.py"
23:22:44:
23:22:44:Attempting to start ALPR with /app/modules/ALPR/bin/linux/python38/venv/bin/python3 "/app/modules/ALPR/ALPR_adapter.py"
23:22:44:
23:22:44:Module 'License Plate Reader' (ID: ALPR)
23:22:44:Module Path: /app/modules/ALPR
23:22:44:AutoStart: True
23:22:44:Queue: alpr_queue
23:22:44:Platforms: windows,linux,macos,macos-arm64
23:22:44:GPU: Support disabled
23:22:44:Parallelism: 0
23:22:44:Accelerator:
23:22:44:Half Precis.: enable
23:22:44:Runtime: python38
23:22:44:Runtime Loc: Local
23:22:44:FilePath: ALPR_adapter.py
23:22:44:Pre installed: False
23:22:44:Start pause: 1 sec
23:22:44:LogVerbosity:
23:22:44:Valid: True
23:22:44:Environment Variables
23:22:44:AUTO_PLATE_ROTATE = True
23:22:44:PLATE_CONFIDENCE = 0.7
23:22:44:PLATE_RESCALE_FACTOR = 2
23:22:44:PLATE_ROTATE_DEG = 0
23:22:44:
23:22:44:Started License Plate Reader module
23:22:45:Module ALPR has shutdown
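A note on the urllib3 conflict in the install log above: botocore pins urllib3<1.27 and google-auth pins urllib3<2.0, but 2.0.3 got installed, so re-pinning urllib3 below 1.27 inside the module's venv would satisfy both. A rough sketch is below, using the venv path copied from the log; whether this also cures the immediate shutdown is only a guess.

```python
import subprocess

# Re-pin urllib3 inside the ALPR module's virtual environment so it satisfies
# both botocore (<1.27) and google-auth (<2.0). Venv path taken from the log.
venv_python = "/app/modules/ALPR/bin/linux/python38/venv/bin/python3"
subprocess.run([venv_python, "-m", "pip", "install", "urllib3<1.27"], check=True)
```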
 
Mike:
Any word on whether 5.7.7.11 fixes these issues that 5.7.7.7 had?
 
I am on 5.7.7.10 now and the issue is fixed. I haven't gotten the update for 5.7.7.11 yet, but I would hope it remains fixed in that version.
 
For curiosity's sake, what's an acceptable reaction time, please?

The sample below is from my BI log:
AI: [Objects] car:54% [2143,975 2680,1554] 566ms
AI: [Objects] car:74% [1194,312 1912,843] 573ms
AI: [Objects] person:80% [2226,1050 2622,1556] 627ms
AI: [Objects] person:89% [172,853 580,1614] 1328ms
AI: [Objects] person:80% [801,335 945,559] 3219ms
AI: [Objects] person:81% [2294,511 2519,651] 6528ms

Is there a way to get the reaction time down to a smaller number?
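As an aside, a throwaway way to average those trailing millisecond figures out of BI's AI log lines (format exactly as shown above; the two sample lines are copied from the post):

```python
import re

# Pull the trailing "NNNms" inference time out of each BI AI log line
# and summarize average and worst-case times.
log_lines = [
    "AI: [Objects] car:54% [2143,975 2680,1554] 566ms",
    "AI: [Objects] person:81% [2294,511 2519,651] 6528ms",
]
times = [int(m.group(1)) for line in log_lines
         if (m := re.search(r"(\d+)ms$", line))]
print(f"avg {sum(times) / len(times):.0f}ms, worst {max(times)}ms")
```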
 
To get better times, do the below:

Make sure "Use main stream if available" is unchecked.

Use the Object Detection (YOLOv5 .NET) module and make sure GPU is enabled.
 
Below is ChatGPT's answer.
To sum it up, 6.2 uses PT models and .NET uses ONNX models. The .NET module also works with Nvidia, AMD, and Intel GPUs and is faster. On my RTX 3090, 6.2 takes about 17ms and .NET about 5ms.

PT (PyTorch) and ONNX (Open Neural Network Exchange) are both frameworks used in the field of deep learning, but they serve different purposes and have different characteristics. Here are the key differences between PT and ONNX:

  1. Framework Purpose:
    • PyTorch (PT): PyTorch is a popular deep learning framework that provides a flexible and dynamic computational graph. It is widely used for building and training neural networks, conducting research, and prototyping models. PyTorch allows for easy experimentation and provides a range of tools for training and deploying models.
    • ONNX: ONNX, on the other hand, is not a deep learning framework itself but an open standard for representing trained models. ONNX serves as an intermediate format that enables interoperability between various deep learning frameworks, including PyTorch, TensorFlow, Caffe, and more. It allows models to be trained in one framework and then transferred to another for inference or deployment.
  2. Computational Graph Representation:
    • PyTorch (PT): PyTorch uses a dynamic computational graph, meaning that the graph is constructed on-the-fly during the execution of the code. This dynamic nature allows for greater flexibility in model construction, making it easier to implement complex architectures and dynamic operations.
    • ONNX: ONNX, on the other hand, uses a static computational graph. The graph is defined and optimized before runtime, allowing for efficient execution across different frameworks. ONNX provides a standardized representation that captures the structure of the model and the operations it performs.
  3. Model Portability and Interoperability:
    • PyTorch (PT): PyTorch models are primarily used within the PyTorch ecosystem. While PyTorch provides mechanisms for saving and loading models, the models are not directly portable to other deep learning frameworks without conversion.
    • ONNX: ONNX provides a standardized format for representing models, enabling interoperability between different deep learning frameworks. Models trained in PyTorch can be converted to the ONNX format and then loaded into other frameworks such as TensorFlow for inference or deployment. This portability is especially useful in production environments where different frameworks may be used for different stages of the workflow.
  4. Runtime Efficiency:
    • PyTorch (PT): PyTorch is known for its efficient execution during the training phase. It leverages dynamic graphs and provides extensive GPU support, allowing for high-performance computations on parallel hardware.
    • ONNX: ONNX is designed for efficient runtime execution. The static computational graph used by ONNX allows for optimizations and backend-specific performance improvements. ONNX models can be optimized for specific hardware platforms, leading to faster inference times.
In summary, PyTorch (PT) is a deep learning framework for model development and training, while ONNX (Open Neural Network Exchange) is an open standard for model representation, enabling interoperability between different deep learning frameworks. PT provides flexibility and dynamic graphs, while ONNX offers model portability and efficient runtime execution.
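To make the PT-to-ONNX hand-off concrete, here's a minimal sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime. A stock torchvision classifier stands in for a detector; all shapes and file names are illustrative.

```python
import numpy as np
import onnxruntime as ort
import torch
import torchvision.models as models

# Export a PyTorch model to a static ONNX graph. Any trained nn.Module exports
# the same way; resnet18 is just a stand-in here.
model = models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # batch of one 224x224 RGB image
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported graph with ONNX Runtime. Swap in e.g. CUDAExecutionProvider
# when a GPU-enabled build is installed, which is where the speedup comes from.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
(output,) = session.run(None, {"input": dummy.numpy().astype(np.float32)})
print(output.shape)  # (1, 1000) class scores
```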
 
Thank you for the info.

I made the changes like you suggested.

This is my current reaction time:

AI: [Objects] car:66% [690,334 1095,485] 5667ms

This is far from the 17ms you reported for 6.2.

May I ask if I am doing anything wrong?

I have 2 IPC-Color4K-T cameras and a MINI PTZ. I don't think the camera models matter, but just in case.

Please see the attached picture of my PC specs and confirmation that I have enabled the correct module.

Thank you for your help.

 

Attachments: CPAI.jpg (182.5 KB)