IP Cam Talk Custom Community DeepStack Model

If you want to give it a try, you need to install CUDA & cuDNN and follow the Local Setup in the link below. I am using an RTX 3060 Ti for local training.

Congrats on finding a good deal on a 3090!

I spent some time getting training set up, and I have verified that PyTorch is installed correctly.

Code:
Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.6296, 0.4324, 0.1697],
        [0.9212, 0.9805, 0.0763],
        [0.6045, 0.9442, 0.6604],
        [0.1562, 0.1943, 0.2128],
        [0.8101, 0.6730, 0.9656]])
>>> torch.cuda.is_available()
True
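As a further optional sanity check (assuming the same PyTorch install as above), you can also print the CUDA version the wheel was built against and the name of the detected GPU, which helps confirm the CUDA/cuDNN install matches the PyTorch build:

```python
# Extra diagnostics beyond torch.cuda.is_available(): the CUDA version the
# PyTorch wheel was built against and the name of the detected GPU.
import torch

print(torch.__version__)    # PyTorch build, e.g. 1.11.0+cu113
print(torch.version.cuda)   # CUDA version baked into the wheel (None on CPU-only builds)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```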

I followed the instructions in tutorial.ipynb and I got:

Code:
>>> from IPython.display import Image, clear_output
>>> clear_output()
>>> print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
Setup complete. Using torch 1.11.0+cu113 _CudaDeviceProperties(name='NVIDIA GeForce RTX 3090', major=8, minor=6, total_memory=24575MB, multi_processor_count=82)
>>>

But I haven't been successful in running detect.py.

Should I clone yolov5 inside \deepstack-trainer or at the same level as \deepstack-trainer?
Where should I run this?
Code:
!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)

I get errors if I run "python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/" in cmd.exe.
 
Try the below without the "!" (the leading "!" is Jupyter notebook syntax for running a shell command; in cmd.exe you run the command directly):
Code:
python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
 

I thought so. I did that and got an AttributeError:
[the editor won't let me paste the full output; it rejects some characters]

[error screenshot attached]
 
@Futaba I found DeepStack-Trainer is sensitive to Python and package versions. To address this, I installed Anaconda and created a dedicated environment for DeepStack-Trainer. Later tonight I will post steps on how to do this.
 
That would be awesome!
 
@Futaba follow the steps below to install Anaconda and get DeepStack-Trainer working.

Step 1: Delete C:\deepstack-trainer just in case it is corrupt.

Step 2: Download Anaconda using this link: Anaconda, and install it with the default settings.

Step 3: After installing Anaconda, open Anaconda Navigator from the Start Menu. If it asks you to upgrade, do the upgrade.


Step 4: Reopen Anaconda Navigator, click on Environments, and use the Import button at the bottom of the screen to import the file in the attached zip.


Step 5: After importing the DeepStack-Trainer environment, click on the play button and select Open Terminal.


Step 6: Execute the commands below to reinstall DeepStack-Trainer (you only need to do this step once).
Code:
cd \
git clone https://github.com/johnolafenwa/deepstack-trainer

Step 7: Execute the commands below to train a custom model. (I just PMed you a link to my Google Drive where you can download the Dark dataset so you can test DeepStack-Trainer; the file is too large to attach. Unzip the files to C:\.)
Code:
cd \deepstack-trainer
python train.py --model "yolov5s" --batch-size 64 --epochs 2 --dataset-path "\dark"
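If train.py complains that it cannot find images, a quick layout check can help. I believe deepstack-trainer expects the dataset folder to contain train and test subfolders of images with matching YOLO .txt label files (an assumption worth verifying against the repo README); a minimal stdlib sketch:

```python
# Sanity-check a dataset folder before training. The train/test layout with
# per-image YOLO .txt labels is an assumption based on deepstack-trainer's docs.
from pathlib import Path

def check_dataset(root: str) -> dict:
    """Count labeled images in each split and print a short summary."""
    counts = {}
    for split in ("train", "test"):
        folder = Path(root) / split
        images = [p for p in folder.glob("*")
                  if p.suffix.lower() in (".jpg", ".jpeg", ".png")]
        labeled = [p for p in images if p.with_suffix(".txt").exists()]
        print(f"{split}: {len(images)} images, {len(labeled)} with labels")
        counts[split] = len(labeled)
    return counts
```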
 

Attachments

With @MikeLud1's help over PM, I can now build a simple test model. It gave my RTX 3090 a good workout! I look forward to helping out with building custom models.

 
I figured out how to download the images that DeepStack used to train the object model. With these images I can start to make the community custom DeepStack model, below are the steps to create the custom model.

The first step would be to create a new DeepStack custom model using the same images that DeepStack used, but removing all the labels we do not want. Below is a list of the labels I think we should keep and a list of labels we should remove. Please let me know if I should change the lists.
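For the first step, pruning the unwanted labels can be done by rewriting the YOLO-format annotation files (one .txt per image, each line holding a class id and a normalized box). This is a hypothetical stdlib-only sketch, not the exact files or scripts DeepStack used:

```python
# Hypothetical sketch: filter YOLO-format label files down to a kept subset
# of classes and renumber the class ids. The folder layout and class lists
# are assumptions for illustration.
from pathlib import Path

def filter_labels(label_dir: str, keep: list, all_names: list) -> None:
    """Rewrite each .txt label file, dropping boxes whose class is not kept."""
    # Map old class ids to new ids in the order of the kept list.
    old_to_new = {all_names.index(n): i for i, n in enumerate(keep)}
    for txt in Path(label_dir).glob("*.txt"):
        kept_lines = []
        for line in txt.read_text().splitlines():
            parts = line.split()
            if not parts:
                continue
            cls = int(parts[0])
            if cls in old_to_new:
                kept_lines.append(" ".join([str(old_to_new[cls])] + parts[1:]))
        txt.write_text("\n".join(kept_lines) + ("\n" if kept_lines else ""))
```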

The second step would be to start adding new labels to the custom models, using images that everyone contributes.

Thanks to @105437 for contributing images.
Combined Labels: person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig (future labels: coyote, possum)

General Labels (includes dark model images): bicycle, bus, car, cat, dog, motorcycle, person, truck

Animal Labels: bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig (future labels: coyote, possum)

Dark Labels: bicycle, bus, car, cat, dog, motorcycle, person

Labels removed from the original DeepStack model: airplane, train, boat, traffic light, fire hydrant, stop sign, parking meter, bench, elephant, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush

Combined V2.0 Training Results:
General V2.1 (includes dark model images) Training Results:
Animal V2.0 Training Results:
Dark V1.0 Training Results:
Hey, can you try to add turkey to the list?
 

Attachments

  • turkey.jpg (1.4 MB)
  • turkey2.jpg (1.1 MB)
  • turkey3.jpg (1 MB)
Update: I just posted a new version, General V3.1, in the first post.

Last night I tried to see how a model would train if I combine bicycle, bus, car, motorcycle, and truck into one label, "vehicle". Results were good; attached is General V3.0. This version does not include the dark images; I will include them in V3.1.
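For anyone curious how the merge works on the annotation side, here is a hypothetical sketch: every box labeled bicycle, bus, car, motorcycle, or truck gets rewritten to a single "vehicle" class id in the YOLO .txt files. The paths and class lists are assumptions for illustration, not the exact script used:

```python
# Hypothetical sketch of merging several YOLO classes into one "vehicle"
# class when rewriting label files. Class lists are illustrative assumptions.
from pathlib import Path

VEHICLES = {"bicycle", "bus", "car", "motorcycle", "truck"}

def merge_into_vehicle(label_dir: str, old_names: list, new_names: list) -> None:
    """Rewrite label files so all vehicle-type classes share one class id."""
    mapping = {}
    for old_id, name in enumerate(old_names):
        target = "vehicle" if name in VEHICLES else name
        mapping[old_id] = new_names.index(target)
    for txt in Path(label_dir).glob("*.txt"):
        lines = []
        for line in txt.read_text().splitlines():
            parts = line.split()
            if parts:
                lines.append(" ".join([str(mapping[int(parts[0])])] + parts[1:]))
        txt.write_text("\n".join(lines) + ("\n" if lines else ""))
```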

 
@MikeLud1 I ran the general, V2.0, model overnight on my two problem child cameras. It actually performed quite well. The west camera detected about five more times versus the dark model and the east camera got them all. So far daytime is working at 100%. I've enabled it on another, indoor, camera to see how it does with people. Detection times are very consistent between day and night which is a nice thing.
@sebastiantombs do you have about 50 to 100 images (without detection boxes) from the problem child cameras so I can add them to General V3.1?
 
@MikeLud1 I'll pull some together a little later today. DM sent.
 
I have DS set for 40% confidence, day and night, 5 to 10 images depending on camera FOV, typically 250 ms intervals, not using the main stream. I don't flag the alerts, and I also keep non-verified alerts just to make sure I catch everything. No system is perfect. To get the snapshots for Mike I used the full files recorded 24/7 by the "main" camera. All my DS cameras are hidden clones I keep in a group named, oddly enough, "hidden" for convenience.
 
Why are you cloning the cameras for DS?
 
The DS cameras only record on motion, which also allows a lot of tinkering with motion detection and so on without disturbing the main camera settings. All of the DS cameras are at least dual purpose, some triple purpose, watching for different types of events or things. Using clones makes that really easy to do and keeps things separate so that events can be easily segregated and reviewed.
 
I agree with the concept of cloning (and see the use cases for it, precisely because of having different motion triggers and DeepStack settings), but I have yet to sort it out from a usability standpoint if I am not the only one reviewing the alerts. I'm referring to the WAF, LOL. No matter what I have tried, the "group clone clips" setting never seemed to help much there. Any step by step tips you care to share that I might have missed?
 
I added General 3.0, but I'm still getting car and truck. Should I remove them and only have vehicle in the text box?