What I did was comb through alerts and play back the ones I'm interested in. Then I stop playback and use the Snipping Tool to grab a relatively small image. Obviously I save those to a specific directory. Using them in MakeSense is a breeze since the jpg is already relatively small to begin with. This brings up another question though.
When I look at the images used so far they all appear to be very high resolution photographs. The cameras I'm using are 2K. Daytime caps are quite sharp, but night is another story. Between headlight bloom, streetlight glare and everything else they can be a little "ragged". Are these kinds of captures worthwhile?
I would assume that they are because they are "real use" conditions, but we all know what happens when you assume.
One more question. What object list should we be using before I start in again?
I just noticed this, but when trying the combined.pt I am getting this jpg in my clip list... normally it just shows Motion? Any ideas? Note the 2:02 clip was not using the combined.pt.
That got me looking in the right direction. I noticed the other cams using AI did not show this. I had forgotten to uncheck "save deepstack analysis details".
Thanks
After unchecking I can confirm it's back to normal:
Those are both 5442T-ZE. IR is off and they rely on streetlight only. These are not primarily traffic cameras but are actually watching our lower front yard. In the process they get the street traffic. I use clones for tracking that and these caps are from those clones.
Yes, I've forced color at night and it's not enough unless gain, compensation and noise reduction are rather high, which results in way too much blur. One camera is looking toward the streetlight; the other camera has the streetlight further away from it.
I'll give it another whirl but don't expect any really good results. I've tinkered and tinkered with settings and had them both in forced color for extended times without a lot of success. It is worth another try though.
Something I've noticed about the 5442 series. I have an early model 5442T-AS. When I say early model I mean the very first batch from Andy. It is a good camera, but compared to the other 5442T-AS cameras I have it just isn't quite as good, even with the same firmware. I notice the T-ZE isn't quite as good, either, but that's to be expected with the different optics and slower f-stop of a varifocal.
Before and after trying to get color, forced color. You don't want to know where gain, saturation, brightness, contrast and compensation are at. I keep a full screen of the camera open in the BI console and make adjustments in IE while watching what's going on in the portion of the BI screen that's still visible.
OK so this would be my first time running a custom model. In the BI manual it says "By default, all Custom model files are considered, although you may select a subset here by specifying one or more (comma delimited; omit the file extensions). Custom models are enabled on the global Settings AI page. You may also add faces:0 and/or objects:0 here to specifically skip face and/or default object detection."
If I just wanted to run only the custom model, do I need to do anything like what's bold above? Sorry for hijacking the thread.
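For what it's worth, based only on the manual wording quoted above, I'd guess the camera's Custom models field would look something like this if you only want the custom model to run (assuming the file is named combined.pt, so you enter "combined" without the extension and add the skip flags):

combined,objects:0,faces:0

That's just my reading of the manual text, so someone please correct me if the actual syntax differs.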