mouser.com was able to deliver my orders from November (dual-TPU version). I would recommend the USB version only for testing/development, since it adds some latency and gets flaky under high load.
As far as I can tell, CodeProject only supports PyTorch => so no Coral support for now?
I wonder how much more performance it will give vs. the USB-C version. The only catch, at least for now: I don't think the motherboard in my current Lenovo machine has a spare M.2 PCIe slot.
It's got 2 TPUs, so at least twice. It may be faster still if it can process in parallel, like dual-channel memory. I don't know whether it does or doesn't, so I'm going to assume 2x the performance. It's also going to be 2x the wattage, of course, but at 4 watts it hardly breaks the bank!
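For what it's worth, if someone has the card in hand, Coral's pycoral library makes it easy to measure rather than assume. A rough sketch, with the model filename as a placeholder (I haven't run this on the dual card myself), timing one chip alone and then both chips in parallel threads:

```python
# Rough throughput sketch, assuming pycoral is installed and a
# Coral-compiled model (placeholder filename below). It runs the same
# model on chip :0 alone, then on :0 and :1 in parallel threads, to see
# how close the dual-TPU card gets to the assumed 2x.
import time
from concurrent.futures import ThreadPoolExecutor

from pycoral.utils.edgetpu import make_interpreter

RUNS = 200

def bench(device):
    interp = make_interpreter('model_edgetpu.tflite', device=device)
    interp.allocate_tensors()
    start = time.perf_counter()
    for _ in range(RUNS):
        interp.invoke()  # input tensor left at zeros; timing only
    return RUNS / (time.perf_counter() - start)

single = bench(':0')
with ThreadPoolExecutor(max_workers=2) as pool:
    dual = sum(pool.map(bench, (':0', ':1')))
print(f"single chip: {single:.1f} inf/s, both chips: {dual:.1f} inf/s")
```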
That would normally depend on the software and drivers, but it's very doubtful the same process could access two independent devices unless some really clever stuff were implemented at the driver layer. More likely you'd run two separate software instances (CP.AI or Deepstack on two different ports), one configured to use the first Coral and the other to use the second, at the cost of double the RAM usage but not double the performance gain.
The driver provides access to the individual TPU chips, so it's no problem to use two or more USB TPUs, or one or more of the dual-TPU version. There are also some quite expensive M.2 E-key PCIe adapters with an active cooling solution on the market that handle up to 16 TPU chips (e.g. from ASUS or Blackbox). For home users I would recommend the B-key version with a single TPU, since you can then use quite cheap PCIe M.2 adapters.
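To illustrate the per-chip access: a minimal pycoral sketch (the model filename is a placeholder) that enumerates the visible chips and pins one interpreter to each within a single process:

```python
# Minimal sketch, assuming the pycoral runtime is installed. A dual-TPU
# M.2 card enumerates as two PCIe devices, and each chip can be claimed
# by its own interpreter in the same process.
from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

tpus = list_edge_tpus()
print(f"Found {len(tpus)} Edge TPU(s): {tpus}")

# ':0', ':1', ... address chips in enumeration order, regardless of
# whether they are USB or PCIe devices.
interpreters = [
    make_interpreter('model_edgetpu.tflite', device=f':{i}')
    for i in range(len(tpus))
]
for interp in interpreters:
    interp.allocate_tensors()  # each interpreter now holds one chip
```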
When I installed this with Docker, it put me on the 2.0 beta. Does this version already have the @MikeLud1 custom models? I noticed the directory of my Docker has:
So all I need to do is put ipcam-combined.pt into my Blue Iris custom model folder?
As far as I can tell, unless you are part of the insider beta-testing team, we are advised to stay on the 1.6.x releases.
I've been using 2.0-Beta since it was released over a month ago, and it has worked fine and without issue, so there is nothing really wrong with that version from the object-detection perspective. Maybe there are some other concerns (the module install/uninstall functions don't seem to work, for example), but nothing I use with BI. It is, after all, the Docker container they have pushed as the one you would pull by default. If it were a problem, I assume they would have removed it weeks ago.
Lol, I saw that, and well, I guess you can call me a beta tester then, because I'm definitely testing it.
I realized that they are included (I think), and then I also dropped them into a folder called custom-models on the desktop of my Blue Iris server and mapped it on the Blue Iris AI tab.
The only issue I was having was that Blue Iris was trying to use IPcam-combinded, and CodeProject couldn't find it, because it's called ipcam-combined.pt in the Docker container. So I changed the name of it in the container, when honestly I should have just changed the name on my Blue Iris server. It's working now, but I'm getting AI times of 1000 ms or so.
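If anyone else runs into this, a quick way to catch the mismatch is to compare the model name Blue Iris sends against the .pt files actually in the mapped folder. A minimal sketch; the folder path and the requested name below are just examples from my setup, not canonical values:

```python
# Minimal sanity check, assuming the custom-models folder is mapped at
# the example path below (adjust for your setup). It compares the model
# name Blue Iris is configured to send against the .pt files actually
# present, case-insensitively, so both a case mismatch and a
# misspelling like 'IPcam-combinded' show up immediately.
from pathlib import Path

MODEL_DIR = Path(r'C:\BlueIris\custom-models')  # example path
requested = 'IPcam-combinded'                   # the name BI was sending

available = {p.stem.lower(): p.name for p in MODEL_DIR.glob('*.pt')}
if requested.lower() in available:
    print(f"OK: '{requested}' resolves to {available[requested.lower()]}")
else:
    print(f"No match for '{requested}'. Files found: {sorted(available.values())}")
```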