Blue Iris and DeepStack ALPR

Yes. However, I just now changed the logon account from a local user to one with elevated privileges, since I see permission errors when I run DeepStackALPR from the command prompt. Fingers crossed.
 
I'm getting very good images in ALPR.jpg and car.jpg, but my ALPR (virtual) camera in Blue Iris is just a gray screen that says "NON IMAGE DATA".
The virtual camera is set to Network IP on my Blue Iris server, port 81, 160x160 resolution, 15 FPS (the same as the LPR primary camera); it triggers on hi-res JPG files with 0.1 s and 1 s break times and no AI. Alerts are set to fire when triggered on all motion zones, with an On Alert action that overwrites path.txt using &ALERT_PATH as the argument.
The alerts are showing high confidence (70-90%) and I can read the plates, but there is no LPR feedback on the alerts and no logs in the Alerts folder for the ALPR (virtual) camera.
Here's what I'm getting in the DeepStackALPR log. Any suggestions?
(Screenshot of the DeepStackALPR log attached: 1647820194064.png)

The Blue Iris log is continuously showing the ALPR object with the message "Signal: network retry".

Edit: Rather than using my local (10.0.x.x) IP for the BI server, I changed it to 127.0.0.1:81, which gave me a new error, "Unauthorized". I added the user and password for my BI user and now it shows the plate on the virtual ALPR feed. Waiting to see if alerts will show the plate now.

Edit 2: Alerts still not showing plates.

Edit 3: All the alerts are showing NCF.
 
This is pretty amazing. I have it up and running to a point. I am getting some awesome cropped tags, but setting this up has killed DeepStack working on all my other cameras. When I follow the steps given earlier using the shell, this is what I get:


OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted: ('127.0.0.1', 5000)
[6160] Failed to execute script 'DeepStackALPR' due to unhandled exception!

I tried changing DeepStack to 127.0.0.2, but it didn't seem to help.



I changed the port to 5001 and the error seems to have gone away. I'll have to see from here if the rest of my stuff comes back working.

Spoke too soon. DSALPR wasn't running in Task Manager. Running it now, I'm getting the same OSError.
 
If you are running DeepStackALPR.exe at the command prompt, you need to stop the DeepStackALPR service first so that port 5000 is not used twice.
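The WinError 10048 above is the classic symptom of two processes trying to listen on the same port. As a quick sanity check before launching the exe, something like the sketch below (a generic snippet, not part of DeepStackALPR itself) will tell you whether port 5000 is already owned:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if another process is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            # bind() fails with WinError 10048 (EADDRINUSE) if the port is taken
            s.bind((host, port))
            return False
        except OSError:
            return True

if __name__ == "__main__":
    if port_in_use("127.0.0.1", 5000):
        print("Port 5000 is taken - stop the DeepStackALPR service first")
    else:
        print("Port 5000 is free")
```

On Windows, `netstat -ano` in an elevated prompt will similarly show which PID owns the port.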
 
Thank you. Below is an example of what I got, one single time. Great job on this. I've read through the thread and see another person posted about the permission issue, but I couldn't find a solution to the car.jpg permission issue. I've deleted car.jpg and rebooted, but no change.
 

Attachments: ALPR.20220415_173231641.9.jpg, Untitled.png
MikeLud1, you seem to be the resident custom DeepStack model expert. I have a slightly off-topic question, but I hope you can help. I'm trying to create a custom model and have already created my dataset of images and labels. I was able to start deepstack-trainer on Google Colab and it appeared to be working, but before it could even get through the 300 epochs, it said I had run out of GPU and needed to switch to a Pro account.

I then tried switching to doing the training on my PC. I set up CUDA 11.3 and cuDNN and started the training. It returns with this error:
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Other websites have said this may be because deepstack-trainer needs an older version of PyTorch, and I was told to run "pip3 install torch==1.7.1+cpu torchvision==0.8.2+cpu -f ". That gave me:
ERROR: Could not find a version that satisfies the requirement torch==1.7.1+cpu (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu113, 1.11.0+cu115)
ERROR: No matching distribution found for torch==1.7.1+cpu

Can you tell me anything about how you got your training environment set up? Any hints on what I can do to resolve this?
 
How many epochs are you getting before the error? You can try reducing the epochs to 150 by using this command: "!python3 train.py --dataset-path "my-dataset" --epochs 150". Do you have an Nvidia GPU to do the training locally? If not, the CPU can take days. Some of the models I trained use as few as 60 epochs. I am training locally using a GeForce RTX 3060 Ti, and it takes up to 12 hours to train 60 epochs with about 10,000 images. If you are using fewer images, it will take less time to train.
 
I think it was about 75 epochs when it quit on Google Colab. But I'd really like to do it locally anyway, which is why I switched to trying to install it locally (my GPU is a 1050 Ti).
The problem is I can't get it to run, and I get the errors I listed before. Could you tell me what version of CUDA you are using? Also the version of PyTorch?

I tried CUDA 11.3 and PyTorch 1.11 and got that first error. Then I tried to downgrade to PyTorch 1.7 and it gave that second error.

Finally, if I can get past that and get the training running locally: you seem to have a nice GUI running with graphs, etc. How did you do that?
 
To install PyTorch use this command:
Code:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
Also make sure you run "pip install -r requirements.txt" from the "deepstack-trainer" folder before training.
 
The root cause was that I had Python 3.10 installed. I downgraded to 3.7 and have now gotten past all the dependency issues. So I guess you must have been using Python 3.7 too?

The new stopping point is in LoadImagesAndLabels.collate_fn. I'll have to track that down, but at least I'm starting to make progress. I know this was working in Google Colab, so I have to assume my dataset is correct. Not exactly sure what to do next, but thank you for reminding me about requirements.txt; I had missed that step.
 
Okay, I changed the DataLoader to single-threaded (num_workers = 0) and that got past the multi-threading issue in the data-loading function. I also set the batch size to 8. I'm finally starting to see it process! Speed and the number of epochs are going to be the next issue. I'll have to see how it progresses with my 1050 Ti.
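For anyone hitting the same collate_fn hang, the workaround boils down to two DataLoader arguments. A minimal sketch, with a dummy dataset standing in for the trainer's LoadImagesAndLabels (the class name, image size, and label shape here are illustrative, not the trainer's actual code):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class DummyPlates(Dataset):
    """Stand-in for the trainer's image/label dataset (illustrative only)."""
    def __init__(self, n: int = 32):
        self.n = n

    def __len__(self) -> int:
        return self.n

    def __getitem__(self, idx):
        # A fake 3x640x640 image tensor and one empty YOLO-style label row.
        return torch.zeros(3, 640, 640), torch.zeros(1, 6)

loader = DataLoader(
    DummyPlates(),
    batch_size=8,    # small enough to fit a 4 GB 1050 Ti
    num_workers=0,   # single-threaded loading sidesteps the worker-process issue
)

images, labels = next(iter(loader))
print(images.shape)  # torch.Size([8, 3, 640, 640])
```

With `num_workers=0` all loading happens in the main process, which trades some throughput for avoiding multiprocessing problems on Windows.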
 
With your GPU you can probably get away with a batch size of 12; I am using 24 with my 3060 Ti.
 
Thanks! I really appreciate you taking the time to help me. I tried higher, but I ran out of CUDA memory, so I'm leaving it at 8. I have about 1,500 images, 10 classes, and am trying 60 epochs. It looks like about 5 hours to complete. I may go back to Google Colab, but the problem is that if it times out, my training results go away, which makes that time worthless. I wonder if there is a way to tie it to Drive so the results are still around even if the session is clobbered.
 
You can also try using
Code:
--model "yolov5s"
instead of
Code:
--model "yolov5m"