There is an ongoing thread about CodeProject.AI Server, but recently someone asked for a thread trimmed down to the basics: what it is, how to install, how to use, and latest changes. Here it is. This post will be updated.
What It Is
This is the main article about CodeProject.AI Server on the CodeProject site. It details what it is, what's new, what it includes, and what it can do.
How to Install
This is the main documentation page for CodeProject.AI Server, which includes links to the latest version and a quick guide to setting up and running CodeProject.AI Server in Visual Studio Code or Visual Studio.
How to Use
Here are a few articles on how to use CodeProject.AI Server; a minimal example of calling the server's API follows the list:
- Setting Up a Wyze Cam with Blue Iris and CodeProject.AI Server
- How to Run CodeProject.AI Server in Docker
- CodeProject.AI Server, Blue Iris and Face Recognition
- Guide to Package Detection in CodeProject.AI with Blue Iris
- Solutions for Common Issues with Blue Iris and CodeProject.AI Server
- How to Train a Custom YOLOv5 Model to Detect Objects
- Adding a New Module to CodeProject.AI Server
- Adding a .NET AI Module to CodeProject.AI Server
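For a quick taste of what these articles cover, here's a minimal sketch of calling the server's object detection endpoint from Python. It assumes a default local install on port 32168 and a hypothetical image named street.jpg; the /v1/vision/detection route and the predictions/label/confidence response fields follow the server's DeepStack-compatible API, but check the docs for your version.

```python
import requests

# Minimal sketch: send an image to a local CodeProject.AI Server install.
# Assumes the default port 32168 and a hypothetical image, street.jpg.
with open("street.jpg", "rb") as image:
    response = requests.post(
        "http://localhost:32168/v1/vision/detection",
        files={"image": image},
        data={"min_confidence": 0.4},  # drop low-confidence detections
    )

# Each prediction includes a label, a confidence, and a bounding box.
for prediction in response.json().get("predictions", []):
    print(prediction["label"], round(prediction["confidence"], 2))
```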
Version 2.0
- 2.0.6: Corrected issues with installing downloadable modules
- Our new Module Registry: download and install modules at runtime via the dashboard
- Improved performance for the Object Detection modules
- Optional YOLO 3.1 Object Detection module for older GPUs
- Optimised RAM use
- Support for Raspberry Pi 4+. Code and run directly on the Raspberry Pi using VSCode
- Revamped dashboard
- New timing reporting for each API call (see the sketch after this list)
- New, simplified setup and install scripts
- Image handling improvements on Linux, and multi-threaded ONNX on .NET
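To illustrate the new timing reporting, here's a sketch that reads the per-call timing from a detection response. The field names used below (processMs, inferenceMs, analysisRoundTripMs) are assumptions based on recent server responses, so treat them as illustrative and check the dashboard or docs if your version differs.

```python
import requests

# Sketch of reading the per-call timing added in 2.0. The field names
# (processMs, inferenceMs, analysisRoundTripMs) are assumptions and may
# differ in your version of the server.
with open("street.jpg", "rb") as image:  # hypothetical image file
    result = requests.post(
        "http://localhost:32168/v1/vision/detection",
        files={"image": image},
    ).json()

print("process (ms):   ", result.get("processMs"))
print("inference (ms): ", result.get("inferenceMs"))
print("round trip (ms):", result.get("analysisRoundTripMs"))
```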
Version 1.6.7
- Potential memory leak addressed
Version 1.6.6
- Performance fix for CPU + video demo
Version 1.6.0.0
- For those having issues with GPUs, the dashboard now provides the means to enable or disable a module, and to enable or disable GPU support. "Enabling GPU" in this case means re-enabling GPU support that's already installed and working. We're only supporting NVIDIA CUDA and Apple M1/M2 GPUs at the moment, and that support must be set up at install time (our installers sniff your hardware and do what they can to get things installed)
- The dashboard is a little different. You may need to Ctrl+F5 to see the change
- For those playing with the code, you will need to clean the solution (/Installers/Dev/clean.bat all, or bash clean.sh all) and then re-set up the dev environment (setup.dev.bat, or bash setup.dev.sh, in /Installers/Dev). We've moved things around, and the setup will ensure everything is in place, including the new Python packages.
- For Blue Iris users: if you use custom model detection and have a Custom Model Folder specified in Blue Iris (including those who directly edited the registry), our setup script will copy empty copies of our standard set of model files into that custom model directory so Blue Iris knows which models CodeProject.AI Server can use.
We have provided Blue Iris, and any other application using CodeProject.AI Server, with an API that allows apps to change settings without the need to hack the registry or mess around with config files. Blue Iris has the info and docs on this, and we're hoping for an update that unlocks the disabled Blue Iris settings soon.
- For those writing modules we have improved the .NET and Python (backend) module SDK to make it a ton easier to write new modules. I'll be updating the docs and guides today.
- We've changed the port to 32168. Port 5000 is used by Universal Plug and Play, so it's a risk for conflicts, and we had already switched to 5500 on macOS after Apple took over port 5000 in macOS Monterey. We figured steering well clear of the noise and picking something easy to remember (32-16-8: what could be easier?) would be best.
- We still listen on port 5000 (for now) on Windows and 5500 on macOS, but these just aren't (and never were) guaranteed to be viable in the future; see the port-probing sketch after this list.
- And finally, for macOS users on M1 or M2 chips, our YOLO object detector now supports Metal Performance Shaders (MPS) on those GPUs. It's a nice speed boost; a minimal device-selection sketch follows below.
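Since clients may run into installs listening on either the old or new ports, here's a minimal sketch of finding a local server. It doesn't assume any particular API route; it simply tests which of the documented ports accepts a connection.

```python
import socket

# Probe the ports the server may listen on, preferring the new 32168.
# 5000 (Windows) and 5500 (macOS) remain open for now but aren't guaranteed.
def find_server_port(host="localhost", ports=(32168, 5000, 5500)):
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return port
        except OSError:
            continue
    return None

port = find_server_port()
print(f"CodeProject.AI Server answering on port {port}" if port else "No server found")
```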
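As for the Metal support mentioned above, here's a minimal sketch of how a PyTorch-based module can pick the MPS device when it's available (torch.backends.mps requires PyTorch 1.12+); the tensor shape is just an illustrative YOLO-sized input, not anything from our modules.

```python
import torch

# Prefer Apple's Metal Performance Shaders (MPS) backend when available,
# falling back to CPU. Requires PyTorch 1.12+ on an M1/M2 Mac.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Illustrative YOLO-sized input tensor created directly on the device.
x = torch.rand(1, 3, 640, 640, device=device)
print(f"Running on: {x.device}")
```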