CodeProject.AI Version 2.0

How can I get CPAI to use the GPU instead of the CPU? Do I need to replace my custom models with ones that support GPU?

I do not have an Nvidia GPU and would like to make use of my Intel iGPU.
Click the three dots at the end of the module entry and then select Enable GPU. The model type depends on the module you are using, not on the GPU.

Screen Shot 2023-06-17 at 10.35.49 PM.png
 
What does the "t" in the "to cancel" field do?
It forces all the real-time images to be analyzed regardless of whether or not a confirmation object was found. You can use anything you want in this field as long as it is not an object in the object list for the model being used.
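A hedged sketch of the logic described above (the function and variable names are illustrative, not Blue Iris internals):

```python
# Illustrative sketch only: these names are made up, not Blue Iris internals.
def should_keep_analyzing(detected_labels, cancel_token):
    """Keep analyzing real-time images until the cancel token is detected.

    Because the token (e.g. "t") is not a label the model can ever emit,
    the cancel condition never fires, so every image keeps being analyzed
    even after a confirmation object has been found.
    """
    return cancel_token not in detected_labels

print(should_keep_analyzing({"person", "car"}, "t"))  # prints True: keep analyzing
```

Any string works as the token, as long as it can never appear in the model's label list.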
 
It forces all the real-time images to be analyzed regardless of whether or not a confirmation object was found. You can use anything you want in this field as long as it is not an object in the confirmed object list for the model being used.
Interesting... I tried CPAI with custom models. I didn't use the cancel field, though, and was getting a lot of "AI server can't be reached" issues and "nothing found" on persons. I'm wondering now if using the cancel field helps.
 
That does seem to be it. I didn't have the option for 5.7.7.5, though, as I was on 5.7.7.7, so I dropped back to 5.7.6.8 and all is working again.
Below is ChatGPT's answer.
To sum it up, 6.2 uses PT models and .NET uses ONNX models. The .NET module also works with Nvidia, AMD, and Intel GPUs and is faster. On my RTX 3090, 6.2 takes about 17 ms and .NET about 5 ms.

PT (PyTorch) and ONNX (Open Neural Network Exchange) are both frameworks used in the field of deep learning, but they serve different purposes and have different characteristics. Here are the key differences between PT and ONNX:

  1. Framework Purpose:
    • PyTorch (PT): PyTorch is a popular deep learning framework that provides a flexible and dynamic computational graph. It is widely used for building and training neural networks, conducting research, and prototyping models. PyTorch allows for easy experimentation and provides a range of tools for training and deploying models.
    • ONNX: ONNX, on the other hand, is not a deep learning framework itself but an open standard for representing trained models. ONNX serves as an intermediate format that enables interoperability between various deep learning frameworks, including PyTorch, TensorFlow, Caffe, and more. It allows models to be trained in one framework and then transferred to another for inference or deployment.
  2. Computational Graph Representation:
    • PyTorch (PT): PyTorch uses a dynamic computational graph, meaning that the graph is constructed on-the-fly during the execution of the code. This dynamic nature allows for greater flexibility in model construction, making it easier to implement complex architectures and dynamic operations.
    • ONNX: ONNX, on the other hand, uses a static computational graph. The graph is defined and optimized before runtime, allowing for efficient execution across different frameworks. ONNX provides a standardized representation that captures the structure of the model and the operations it performs.
  3. Model Portability and Interoperability:
    • PyTorch (PT): PyTorch models are primarily used within the PyTorch ecosystem. While PyTorch provides mechanisms for saving and loading models, the models are not directly portable to other deep learning frameworks without conversion.
    • ONNX: ONNX provides a standardized format for representing models, enabling interoperability between different deep learning frameworks. Models trained in PyTorch can be converted to the ONNX format and then loaded into other frameworks such as TensorFlow for inference or deployment. This portability is especially useful in production environments where different frameworks may be used for different stages of the workflow.
  4. Runtime Efficiency:
    • PyTorch (PT): PyTorch is known for its efficient execution during the training phase. It leverages dynamic graphs and provides extensive GPU support, allowing for high-performance computations on parallel hardware.
    • ONNX: ONNX is designed for efficient runtime execution. The static computational graph used by ONNX allows for optimizations and backend-specific performance improvements. ONNX models can be optimized for specific hardware platforms, leading to faster inference times.
In summary, PyTorch (PT) is a deep learning framework for model development and training, while ONNX (Open Neural Network Exchange) is an open standard for model representation, enabling interoperability between different deep learning frameworks. PT provides flexibility and dynamic graphs, while ONNX offers model portability and efficient runtime execution.

@MikeLud1

Does the .net model require a network connection, as the .net suffix suggests?

I seem to remember I tried .net, but processing timed out when an internet cable wasn't connected, so I went back to 6.2 (no GPU here, so just CPU processing). My processing times on the main stream are 350-450 ms. I've now unticked main stream and will see if that improves. Processing times weren't an issue, but obviously the quicker and the less strain the better. Build 2.1.8 here with BI 5.7.7.2.

I haven't gone past 5.7.7.2 because the last time I did, I needed a complete re-install to clear the issues! Downgrading BI (I'm not sure how it's implemented) failed to clear the issues, as did downgrading CPAI. I can only presume some registry entries or system files from the upgrade were still left in place by the downgrade. It would be useful if both CPAI's and BI's developers looked at their downgrade processes to ensure a proper clean downgrade, and also produced a clean-installation tool that removes all traces of the respective app from the PC, so as to open it up to a clean re-installation of a single app without having to re-install Windows!
 
Post a screenshot of your "system info" tab in CPAI.


Code:
Operating System: Windows (Microsoft Windows 11 version 10.0.22621)
CPUs:             1 CPU x 6 cores. 6 logical processors (x64)
GPU:              Intel(R) UHD Graphics 630 (1,024 MiB) (Intel Corporation)
                  Driver: 31.0.101.2115
System RAM:       16 GiB
Target:           Windows
BuildConfig:      Release
Execution Env:    Native
Runtime Env:      Production
.NET framework:   .NET 7.0.3
System GPU info:
  GPU 3D Usage       0%
  GPU RAM Usage      0
Video adapter info:
  Intel(R) UHD Graphics 630:
    Adapter RAM        1,024 MiB
    Driver Version     31.0.101.2115
    Video Processor    Intel(R) UHD Graphics Family
Global Environment variables:
  CPAI_APPROOTPATH = C:\Program Files\CodeProject\AI
  CPAI_PORT        = 32168
 
Installing the module, if it is not installed yet, needs an internet connection. After the module is installed, it should not need an internet connection to run.

Strange. I was finding that, without an internet connection, it timed out.

The only reason I can put this down to the internet is that at the moment I have a temporary setup whereby my server doesn't have its own LAN connection, so I have to pull the cable from the back of my PC to plug it into the server and vice versa, which happens at least 10 times a day. I found that the timeouts coincided with the times the internet wasn't connected. I don't get any timeouts with a LAN cable plugged in.
 
I'm also not seeing a lot of difference between main and sub stream on YOLO 6.2 processing, but that could be down to the fact I'm running 4092 kbps and 1080p on the substream (vs 4K and 16834 kbps on main).

These are the current processing times on the sub stream - I'm guessing my sub may be some users' main:

Yolo v 6.2.jpg

I think the 899 ms might be down to a Windows update that dropped onto me unexpectedly last night.
 
Strange. I was finding that, without an internet connection, it timed out.

The only reason I can put this down to the internet is that at the moment I have a temporary setup whereby my server doesn't have its own LAN connection, so I have to pull the cable from the back of my PC to plug it into the server and vice versa, which happens at least 10 times a day. I found that the timeouts coincided with the times the internet wasn't connected. I don't get any timeouts with a LAN cable plugged in.

I had the same issue with no internet, and installing the Microsoft loopback adapter fixed it:

  1. Right-click the Windows Start menu icon and select Device Manager (or open Device Manager any other way).
  2. Click Action and select Add legacy hardware.
  3. Click Next on the welcome screen.
  4. Choose "Install the hardware that I manually select from a list" and click Next.
  5. Scroll down, select Network adapters from the list of common hardware types, and click Next.
  6. Select Microsoft as the manufacturer, then select the Microsoft KM-TEST Loopback Adapter model and click Next.
  7. Click Next.
  8. Click Finish.
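When chasing these timeouts, a quick way to tell "server unreachable" apart from "module slow" is to probe the port directly. A minimal sketch (the `is_port_open` helper is made up, not part of CPAI; 32168 is the default CPAI port shown in the system info earlier in the thread):

```python
# Minimal sketch: probe a TCP port to see if the CPAI server is answering.
# is_port_open is an illustrative helper, not part of CPAI.
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Should print True on a machine where CPAI is running on its default port.
print(is_port_open("127.0.0.1", 32168))
```

If this returns False while CPAI is supposedly running, the problem is connectivity (e.g. the missing-loopback situation above) rather than model speed.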
 
I had the same issue with no internet, and installing the Microsoft loopback adapter fixed it:

  1. Right-click the Windows Start menu icon and select Device Manager (or open Device Manager any other way).
  2. Click Action and select Add legacy hardware.
  3. Click Next on the welcome screen.
  4. Choose "Install the hardware that I manually select from a list" and click Next.
  5. Scroll down, select Network adapters from the list of common hardware types, and click Next.
  6. Select Microsoft as the manufacturer, then select the Microsoft KM-TEST Loopback Adapter model and click Next.
  7. Click Next.
  8. Click Finish.

Can you still access the internet as normal with other programs, e.g. browsers etc., after installing this? (I'm typing on my BI server at the moment to avoid having to change LAN cables again. I don't know how many insertions the plugs are rated for, but I must be getting there!)

Also, this is something Mike might need to check on the .net model - to find out why it's accessing the internet, if this is unexpected behaviour.