A Guide to CodeProject.AI Server

sewington
Oct 25, 2022
There is an ongoing thread about CodeProject.AI Server, but recently someone asked for a thread trimmed down to the basics: what it is, how to install, how to use, and latest changes. Here it is. This post will be updated.

What It Is

This is the main article about CodeProject.AI Server on the CodeProject site. It details what it is, what's new, what it includes, and what it can do.

How to Install

This is the main documentation page for CodeProject.AI Server, which includes links to the latest version and a quick guide to setting up and running CodeProject.AI Server in Visual Studio Code or Visual Studio.

How to Use

Here are a few articles on how to use CodeProject.AI Server:

Latest Changes

Version 2.0

  • 2.0.6: Corrected issues with installing downloadable modules
  • Our new Module Registry: download and install modules at runtime via the dashboard
  • Improved performance for the Object Detection modules
  • Optional YOLO 3.1 Object Detection module for older GPUs
  • Optimised RAM use
  • Support for Raspberry Pi 4+. Code and run directly on the Raspberry Pi using VS Code
  • Revamped dashboard
  • New timing reporting for each API call (see the sketch after this list)
  • New, simplified setup and install scripts
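
To make the timing item above concrete, here's a minimal Python sketch (my addition, not part of the official release notes) that posts an image to a local server's Object Detection endpoint and prints the predictions along with the per-call timing. It assumes the server is on the new default port 32168 and that the Object Detection module is running; the timing field names are my best guess, so check the actual JSON your server returns.

# Sketch: query a local CodeProject.AI Server's Object Detection endpoint.
# Assumes the server listens on the new default port 32168.
import requests

def detect_objects(image_path, server="http://localhost:32168"):
    with open(image_path, "rb") as f:
        response = requests.post(
            f"{server}/v1/vision/detection",
            files={"image": f},
            data={"min_confidence": 0.4},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()

    for pred in result.get("predictions", []):
        print(f"{pred['label']}: {pred['confidence']:.2f}")

    # Timing fields below are assumptions based on the dashboard's timing
    # display; inspect your own server's response for the exact keys.
    print("inferenceMs:", result.get("inferenceMs"))
    print("analysisRoundTripMs:", result.get("analysisRoundTripMs"))

detect_objects("test.jpg")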
Version 1.6.8

  • Image handling improvements on Linux, multi-thread ONNX on .NET

Version 1.6.7

  • Potential memory leak addressed

Version 1.6.6

  • Performance fix for CPU + video demo

Version 1.6.0.0

  • For those having issues with GPUs, the dashboard now provides the means to enable / disable a module, or enable / disable GPU support. "Enabling GPU" here means "re-enabling GPU support that's already installed and working". We only support NVIDIA CUDA and Apple M1/M2 GPUs at the moment, and that support must be set up at install time (our installers sniff your hardware and do what they can to get things installed)

  • The dashboard is a little different. You may need to Ctrl+F5 to see the change

  • For those playing with the code, you will need to clean the solution (/Installers/Dev/clean.bat all, or bash clean.sh all) and then re-set up the dev environment (setup.dev.bat or bash setup.dev.sh in /Installers/Dev). We've moved things around, and the setup will ensure everything is in place, including the new Python packages.

  • For Blue Iris users: if you use custom model detection and have a Custom Model Folder specified in Blue Iris (including those who directly edited the registry), our setup script will place empty copies of our standard set of model files into that custom model directory so Blue Iris knows which models CodeProject.AI Server can use.

    We have provided Blue Iris, and any other application using CodeProject.AI Server, with an API that allows apps to change settings without hacking the registry or messing around with config files. Blue Iris has the info and docs on this, and we're hoping for an update that unlocks the disabled Blue Iris settings soon.

  • For those writing modules we have improved the .NET and Python (backend) module SDK to make it a ton easier to write new modules. I'll be updating the docs and guides today.

  • We've changed the port to 32168. Port 5000 is used by Universal Plug and Play, so it's a risk for conflicts. We had already switched to 5500 on macOS because Apple took over port 5000 in macOS Monterey. We figured steering well clear of the noise and picking something easy to remember (32-16-8 - what could be easier?) would be best.

  • We still listen on port 5000 (for now) on Windows and 5500 on macOS, but these just aren't (and never were) guaranteed to remain available. (A quick port-check sketch follows this list.)

  • And finally, for those macOS users on M1 or M2 chips, our YOLO Object Detector now supports Metal Performance Shaders for those GPUs. It's a nice speed boost.
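
Since a couple of the items above mention the port change, here's a small self-contained Python check (my addition) that reports whether anything is answering on the new port 32168 and the legacy ports 5000/5500. It's a plain TCP connect test, nothing specific to CodeProject.AI.

# Sketch: check which of the CodeProject.AI ports are answering locally.
import socket

def port_open(host, port, timeout=1.0):
    # Returns True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (32168, 5000, 5500):  # new default, Windows legacy, macOS legacy
    state = "open" if port_open("localhost", port) else "closed"
    print(f"port {port}: {state}")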
 
I can get Face and Object Detection working, but when using the GPU, ALPR and OCR fail completely. In Docker with NO GPU, everything works perfectly. Looking for suggestions, thanks
 
Hello guys,

I've got CodeProject.AI (GPU, Docker) running and it takes about 10GB of RAM. Is that normal, or did I maybe do something wrong? I couldn't find any info about the amount of RAM consumed when running that Docker image. Copilot just says 8GB for the CPU version and 12GB of GPU memory for the GPU version, but I need to know the RAM for the GPU version :-D

Depending on the answer, I might need a how-to to reduce the RAM amount and maybe shift a bit of it to SSD if possible. But I understand that the models probably take a bunch of RAM when loaded.



Bonus question: I've also got face detection running, but it barely finds faces. Even clear "face in front of camera" situations are not detected. Where can I find better settings to adapt?
 
Hello guys,

I've got CodeProject.AI (GPU, Docker) running and it takes about 10GB of RAM. Is that normal, or did I maybe do something wrong? I couldn't find any info about the amount of RAM consumed when running that Docker image. Copilot just says 8GB for the CPU version and 12GB of GPU memory for the GPU version, but I need to know the RAM for the GPU version :-D

Let's deal with one item at a time. Does Copilot indicate specifically which apps are taking up RAM? Is this while you're running other apps as well, like Blue Iris? And which modules do you have running? If you shut down some that are unnecessary, does that help?
 
I've got CodeProject.AI (GPU, Docker) running and it takes about 10GB of RAM
I have no idea how much memory should be used when running with Docker, but I'm running the non-Docker installation and am using well under 2GB of memory across the CodeProject.AI Server, YOLOv5Net, and Python processes with my GPU. I'm running over 20 camera streams through CodeProject.AI, so my memory usage isn't low due to a lack of load. Here's a Task Manager screenshot of the top memory users on my computer. There must obviously be some overhead to running Docker, but that seems excessive to me.
[Screenshot: bi-codeproject-task-manager-memory.jpg, Task Manager sorted by memory usage]
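
If you want to do the same comparison without Task Manager (or inside a Linux/Docker host), here's a small generic Python sketch (my addition, not from this thread) that lists the top memory consumers. It needs the third-party psutil package (pip install psutil); process names will differ between a native install and a Docker container.

# Sketch: print the ten processes using the most resident memory (RSS).
import psutil

procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    mem = p.info.get("memory_info")
    if mem is None:  # access denied or process gone; skip it
        continue
    procs.append((mem.rss, p.info.get("name")))

for rss, name in sorted(procs, key=lambda t: t[0], reverse=True)[:10]:
    print(f"{rss / (1024 ** 2):8.0f} MB  {name or '?'}")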