CodeProject.AI Version 2.0

Does anyone know how to manually set the localhost in 2.4.6? (for meshing)
 


This option is not available yet... The mesh system is completely automatic: it uses multicast (UDP) traffic to broadcast its presence and the default TCP port 32168 to receive requests. The developers are working on a way to manually enter your own server information, but it has not shown up in test builds yet.

Because meshing relies on multicast traffic, its use cases are limited: e.g. you cannot use it over a VPN, and it is difficult (without the right hardware) when the individual servers are on different subnets.
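If you want to check whether multicast datagrams actually reach a given machine (i.e. whether a mesh broadcast from another subnet would ever arrive), a quick stdlib-only listener can tell you. Note: the multicast group and port below are placeholders, not values confirmed anywhere in this thread; substitute whatever CodeProject.AI actually uses if you can find it in the server logs.

```python
import socket
import struct

# Placeholder values -- NOT confirmed CodeProject.AI settings.
MCAST_GROUP = "239.255.0.1"
MCAST_PORT = 32168

def listen_for_broadcasts(timeout=5.0):
    """Join a multicast group and report the sender of the first datagram heard.

    Returns the sender's IP as a string, or None if nothing arrives before the
    timeout (or the interface does not support multicast). Useful for testing
    whether multicast traffic crosses your subnet or VPN boundary.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Ask the kernel to join the multicast group on the default interface.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    try:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    except OSError:
        sock.close()
        return None  # interface/network does not support multicast here
    sock.settimeout(timeout)
    try:
        _data, addr = sock.recvfrom(4096)
        return addr[0]  # heard a broadcast from this host
    except socket.timeout:
        return None     # silence: multicast is likely blocked on this path
    finally:
        sock.close()
```

If this returns None on a box that should be seeing mesh broadcasts, the problem is almost certainly the network path (router IGMP settings, VPN, or subnet boundary) rather than CodeProject.AI itself.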
 

Got this to work from Chris Maunder's reply in the CPAI discussions.

In my case I was attempting to MESH with a docker container on my server.

You have to specify the docker host machine name in the appsettings.json.

Reposting his instructions here:

The way I've achieved this with Docker is to use a feature I haven't documented yet: KnownMeshHostnames.

In the appsettings.json file, which is under /server, there is a section for MeshOptions

Code:
"MeshOptions": {
  "Enable": false,
  "EnableStatusBroadcast": true,
  "EnableStatusMonitoring": true,
  "AcceptForwardedRequests": true,
  "AllowRequestForwarding": true,

  "KnownMeshHostnames": [ ]
},

In the non-Docker server's appsettings.json, add the Docker host machine's name (not the hostname/IP of your Docker container).

Code:
"MeshOptions": {
  "Enable": false,
  "EnableStatusBroadcast": true,
  "EnableStatusMonitoring": true,
  "AcceptForwardedRequests": true,
  "AllowRequestForwarding": true,

  "KnownMeshHostnames": [ "MY-PROXMOX-SRV" ]
},

This then allows the other machines to ping MY-PROXMOX-SRV (or whatever the machine is called) via HTTP and get the mesh info for the server inside the Docker container. This, in turn, will result in the Docker container pinging the machines that just pinged it. The mesh should complete. "Should". It's a bit messy when it comes to network layers, unfortunately.
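Before editing appsettings.json, it's worth confirming the other machines can actually reach the Docker host on the mesh port at all. A minimal stdlib check (the hostname below is an example; use your own Docker host's name):

```python
import socket

def mesh_port_open(host, port=32168, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    32168 is the default CodeProject.AI server port mentioned above;
    pass a different port if you've changed it.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical hostname from the post above):
# print(mesh_port_open("MY-PROXMOX-SRV"))
```

If this returns False from one of the would-be mesh peers, no amount of KnownMeshHostnames configuration will help until the firewall/port-mapping issue is fixed.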
 
I installed an older GPU I had sitting around, and it looks like I get an error periodically throughout the day. Any help would be appreciated.

Code:
18:22:05:Object Detection (YOLOv5 6.2):  [RuntimeError] : Traceback (most recent call last):
  File "C:\Program Files\CodeProject\AI\modules\ObjectDetectionYOLOv5-6.2\detect.py", line 141, in do_detection
    det                  = detector(img, size=640)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 705, in forward
    y = self.model(x, augment=augment)  # forward
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 515, in forward
    y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 209, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 121, in _forward_once
    x = m(x)  # run
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\runtimes\bin\windows\python37\venv\Lib\site-packages\yolov5\models\yolo.py", line 74, in forward
    xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
RuntimeError: The size of tensor a (30) must match the size of tensor b (24) at non-singleton dimension 2
 

Interesting to see what driver and CUDA version you are using. I have an EVGA GeForce GTX 950 that I can't get recognized by the detection module. I always just see:


I'm very new to CP.AI and likely just know enough to barely stay out of trouble :) I may try changing my drivers to match what you are running....
 
I don't know how to find the CUDA version.
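One easy way to find the driver and CUDA version on a machine with an NVIDIA card is `nvidia-smi`: the header of its output reports both "Driver Version" and "CUDA Version". A small sketch that shells out to it and parses those fields (returning None for each if the tool isn't installed):

```python
import re
import shutil
import subprocess

def nvidia_versions():
    """Return (driver_version, cuda_version) as reported by nvidia-smi.

    Returns (None, None) when nvidia-smi is not on PATH, e.g. no NVIDIA
    driver is installed on this machine.
    """
    if shutil.which("nvidia-smi") is None:
        return (None, None)
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    # The banner line looks like:
    # | NVIDIA-SMI 528.49  Driver Version: 528.49  CUDA Version: 12.0 |
    driver = re.search(r"Driver Version:\s*([\d.]+)", out)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", out)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)
```

Note the "CUDA Version" shown there is the maximum CUDA runtime the installed driver supports, not necessarily the CUDA toolkit version any given module was built against.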
 
Oh! I used the 528.49 installer off their page. I didn't go looking for a specific driver.

Good to know, I may try that installer on my setup. Mine does seem to be working but it bugs me that it says my CPU is doing the work and not my GPU... I see no other setting to enable/disable the GPU... thanks for that info.
 
I had to reinstall a couple of times and power-cycle my PC, and it eventually showed up. The first time I installed CP.AI it showed CPU instead of GPU being used.
 
Also try this module
 
Hey all, first post here! So, I've gone and rewritten the Coral TPU implementation. I've implemented image tiling, thread-safety, pipelining, segmented TPU pipeline support, and multi-TPU support. (I may have gone a bit overboard.) It should run stupidly faster and scale well with each additional TPU.

However, my hardware hasn't arrived so I haven't even tested the damn thing. Not even once. I'm likely making some stupid assumptions in this code. There is zero chance of it running without errors. The hardware is supposed to arrive next week and then I'm immediately taking the family on holiday for a week. So. In the interests of not simply sitting on this indefinitely, is there anyone who'd be passionate about debugging this? DM me.
 

I’d be happy to test the module


 
I am also waiting for my dual Edge TPU and the PCIe adapter to arrive. I'd be willing to give it a shot, too.

 
Question: in CPAI 2.4.6-Beta, do you have all the modules (LPR and Object Detection (YOLOv5 .NET)) running on all your mesh servers and your Blue Iris main machine, or just one machine?
 
You need to have the same modules installed and enabled on all the mesh servers.
I have YOLOv5 .NET enabled on my BI box, since it has no graphics card. But I have YOLOv5 6.2 enabled on my other PC, since it does have a decent graphics card. Mesh looks like it is working... Is this not the way to do it? Should I change the second PC back to YOLOv5 .NET as well?
 
So enlighten me: all this new mesh networking with CPAI is just utilizing other PCs' processors/GPUs? Wouldn't this mean you have to keep those PCs on 24/7, like an AP (Access Point) network? Sounds like a good solution as long as your PCs don't go to sleep or hibernate.
 