Full ALPR Database System for Blue Iris!

I've done this:
Bash:
root@alpr:/opt/ALPR_Database# docker compose down
[+] Running 3/3
 ✔ Container alpr_database-app-1  Removed                                                                                      1.2s
 ✔ Container alpr_database-db-1   Removed                                                                                     11.2s
 ✔ Network alpr_database_default  Removed                                                                                      0.3s
root@alpr:/opt/ALPR_Database# docker compose pull
[+] Pulling 2/2
 ✔ db Pulled                                                                                                                   0.8s
 ✔ app Pulled                                                                                                                  0.8s
root@alpr:/opt/ALPR_Database# docker compose up -d
[+] Running 3/3
 ✔ Network alpr_database_default  Created                                                                                      0.1s
 ✔ Container alpr_database-db-1   Started                                                                                      1.8s
 ✔ Container alpr_database-app-1  Started                                                                                      2.4s
root@alpr:/opt/ALPR_Database#
md5:
root@alpr:/opt/ALPR_Database# md5sum *.sql
68299a0cc953f95cf27525b3c3023d41 migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe schema.sql


And still have the No Stream 500 response. Not one plate has been successfully posted to the app. Is it possible this is because I'm a fresh install and not an update? Some issue that hasn't been caught because most people are doing updates? If not, I've done two fresh installs, one on my Windows BI machine and this one on a separate Linux VM, both showing the same error, which points me to BI. But I have no idea where to look. I've reduced the recorded images to as small and poor as they go and still see the error. Any ideas of other places to look would be great, as I really want this to work.

Cheers.
Can you try running the migration manually like @hopalong did?

Also, what do you see in your ALPR DB log (second to last icon)? Are you still seeing database errors?
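If the in-app log page itself won't load, the same output can also be pulled straight from the containers. A minimal sketch, assuming the default app/db service names from the compose file:
Bash:
docker compose logs -f app    # web app / API logs, including the plate-read POST errors
docker compose logs -f db     # PostgreSQL logs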
 
I've done this:
Bash:
root@alpr:/opt/ALPR_Database# docker compose down
[+] Running 3/3
 ✔ Container alpr_database-app-1  Removed                                                                                      1.2s
 ✔ Container alpr_database-db-1   Removed                                                                                     11.2s
 ✔ Network alpr_database_default  Removed                                                                                      0.3s
root@alpr:/opt/ALPR_Database# docker compose pull
[+] Pulling 2/2
 ✔ db Pulled                                                                                                                   0.8s
 ✔ app Pulled                                                                                                                  0.8s
root@alpr:/opt/ALPR_Database# docker compose up -d
[+] Running 3/3
 ✔ Network alpr_database_default  Created                                                                                      0.1s
 ✔ Container alpr_database-db-1   Started                                                                                      1.8s
 ✔ Container alpr_database-app-1  Started                                                                                      2.4s
root@alpr:/opt/ALPR_Database#
md5:
root@alpr:/opt/ALPR_Database# md5sum *.sql
68299a0cc953f95cf27525b3c3023d41 migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe schema.sql


And still have the No Stream 500 response. Not one plate has been successfully posted to the app. Is it possible this is because I'm a fresh install and not an update? Some issue that hasn't been caught because most people are doing updates? If not, I've done two fresh installs, one on my Windows BI machine and this one on a separate Linux VM, both showing the same error, which points me to BI. But I have no idea where to look. I've reduced the recorded images to as small and poor as they go and still see the error. Any ideas of other places to look would be great, as I really want this to work.

Cheers.

No, there is no difference between the fresh install and the update. If you see anything in the application's logs about a "relation does not exist" or similar, that is the problem: your migrations.sql and schema.sql did not download correctly.

It is extremely strange that you're experiencing this when using the install scripts for both Linux and Windows. I, and many others, have tested the scripts thoroughly, and they work well.


Fundamentally, there are two parts to the system: the Next.js web app and the PostgreSQL database. I'm going to guess that your issue involves the database. If you happen to be using Unraid, please look at the previous comments. Nonetheless, it should have worked when you tried it on Windows. I don't really see any potential for user error running the scripts, so it is possible that you have found some extremely obscure bug.

Anyways, the web app is practically unbreakable, as it is entirely contained within the Docker image. If you are getting a No Stream 500, that means there is a server error. Check the logs. If it is in fact a database issue, you need to ensure that the schema.sql and migrations.sql files have the proper contents.



I'm sorry this has happened to you, and I wish I understood what caused it - even more robust scripts will come next with @VideoDad's hashing suggestion, but it should work as is. The fact that you have had trouble on both Windows and Linux is very strange. You should delete both .sql files, download them from the GitHub Repository, and place them back in that directory. You may also need to delete the volume in Docker Desktop for your database.
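Roughly, as a sketch - assuming the raw GitHub URLs on the default main branch (adjust if the branch differs), and keeping in mind that removing the volume wipes any plates already stored:
Bash:
cd /opt/ALPR_Database
rm -f schema.sql migrations.sql
curl -fLO https://raw.githubusercontent.com/algertc/ALPR-Database/main/schema.sql
curl -fLO https://raw.githubusercontent.com/algertc/ALPR-Database/main/migrations.sql

# Optional: reset the database volume so the init scripts run from scratch (deletes existing plate data)
docker compose down -v
docker compose up -d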

The whole point of using Docker is to have this be highly standardized and work easily across a range of different computers. That is both my goal for this project and Docker's intention overall: "just works."


I’ve done my best to make the scripts ensure the integrity of the files, but it seems that I may have missed something, as you are the second user (albeit out of 100+) to report difficulties.


If you see anything in the logs that seems related to the database or SQL, like a missing relation or a missing column, that is the issue 100%. Look no further: your schema and migrations are not formatted correctly. There is no other cause. The logs are there to help you - if they say something is missing, that is the cause.

I don’t know how that happens or how to recreate it, but for your purposes, you should manually copy the files, and ensure they are the exact same as the files in the GitHub repository. If you do have any idea how it happened in the first place, please do share… I have a feeling that implementing a hash check will solve this issue for most, although I still don’t know what system configuration causes it.
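For what it's worth, the check itself would be something as simple as this - the expected values below are just the md5 sums reported earlier in the thread, purely as an illustration; the script would pin them to each release:
Bash:
echo "68299a0cc953f95cf27525b3c3023d41  migrations.sql" | md5sum -c -
echo "9caab0fb987b904a13f3e325f5fa5fbe  schema.sql" | md5sum -c -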



I’m not sure if it was you who said this, but you absolutely do NOT need to delete the container every time.



If you do manage to pinpoint the cause of your install issue, please do share so we can document for posterity.
 
The OCR works the same for all states/countries, including those with multi-line number plates. @MikeLud1 has taken care to ensure this works, and I have tried to continue that in the AI training export.

As far as MAC addresses go, you're talking about BLE or WiFi. It's not so much a matter of make/model, as nearly all cars nowadays have Bluetooth, nor a matter of their randomness, as even law enforcement can't really pinpoint a vehicle by its MAC address. It's more of a BLE/WiFi protocol challenge, and staying within the bounds of the FCC regulations. The goal with the TPMS, for example, is not to be able to provide the TPMS ID to law enforcement, but to be able to associate the TPMS ID with a license plate that you can then use. The same would be true for any other sort of uniquely identifiable RF.

We will experiment with WiFi/Bluetooth after we have a verdict on the usability of the TPMS system, as that is our most promising form of PII.
BLE MAC. Not all vehicles will broadcast it when out of pairing mode, and a select number may randomize them, but they can be tracked. Although not sure on FCC regulations, I know that it is tracked in AU and some LE use it.
 
Can you try running the migration manually like @hopalong did?

Also, what do you see in your ALPR DB log (second to last icon)? Are you still seeing database errors?

This is a good point. If we can figure out why Unraid complains, I can add more specific errors that will make things easier to debug.
 
BLE MAC. Not all vehicles will broadcast it when out of pairing mode, and a select number may randomize them, but they can be tracked. Although not sure on FCC regulations, I know that it is tracked in AU and some LE use it.


They usually don't change that often, but that isn't the main issue. As you mention, it is the fact that they have to broadcast in pairing mode. We have raised other angles such as generic public or common SSIDs. None work really well, but TPMS is not a foolproof solution either.


I'm definitely open to exploring other options afterwards, and supporting external data ingestion (for things that may not be a good idea to openly document in the code / be legal) (If anyone has 5-10 grand burning a hole in their pocket, maybe consider buying a 5G transceiver to spoof base stations. Then you will really get great results HAHA......... Though it is not legal in the USA).



If anyone actually does want to buy one of the cellular-grade SDRs, I will add the capability to accept data from it in a heartbeat. I would be fascinated to see what happens. Beware of posting anything, though, whether it be concrete evidence or something as innocent as intentions on public forums like this. The FCC is MEAN, and I happen to know from a family friend that there is a shockingly large number of employees whose job it is to read forums looking for things exactly like this... I doubt IPCT is a commonly monitored place for them, but the internet is forever - always use caution. I would absolutely spoof 5G if I could afford one of those, but I certainly wouldn't admit to it outside of PM.




If you happen to come into possession of this sort of data, maybe you deployed a magical POE lawn gnome that has divine powers to log such signals without breaking any laws....







We can certainly add divine gnome support to the app, but once again, we definitely don't do anything illegal here...





As I'm sure you know, @Columbo, BLE/WiFi is significantly less reliable than analog TPMS IDs. It's definitely in the pipeline for down the road, but I'm trying to take it one step at a time and sort of triage the situation.
 
Actually, I think that trying to intercept toll road/bridge RFID units would probably be the next best thing after TPMS. In California there is a state code that standardized the protocol that the toll transponders had to communicate on. There was a Defcon presentation on it many years ago. Of course, not all vehicles have transponders (I don't have one and just get invoiced by plate), but again, another usable backup form of PII, and 100% reliable to activate and log.

In California, we have a new transponder as of recently (~3-4 years ago) that requires you to select the number of passengers in the vehicle for HOV lanes, but it seems that they are still required to be backwards compatible with the old, more rudimentary system. I don't think activating and logging these would actually be breaking any laws either.... Might have to try......... It is slightly expensive though.
 
They usually don't change that often, but that isn't the main issue. As you mention, it is the fact that they have to broadcast in pairing mode. We have mentioned other angles such as generic public or common SSIDs. None work really well, but TPMS is not a foolproof solution either.


I'm definitely open to exploring other options afterwards, and supporting external data ingestion (for things that may not be a good idea to openly document in the code / be legal) (If anyone has 5-10 grand burning a hole in their pocket, maybe consider buying a 5G transceiver to spoof base stations. Then you will really get great results HAHA......... Though it is not legal in the USA).


As I'm sure you know, this is significantly less reliable than analog TPMS IDs. Definitely in the pipeline, but I'm trying to take it one step at a time and sort of triage the situation.
Fair enough. I might DM you a few things in a week when I'm not travelling and am back at a computer. I know it's hit or miss with it, but my thought was that it is already a common thing sniffed by governments, so lots of data already exists (but not that helpful to you or me).

Anyhow, don't want to distract you too much from this, you're doing god's work :)
 
Fair enough. I might DM you a few things in a week when I'm not travelling and am back at a computer. I know it's hit or miss with it, but my thought was that it is already a common thing sniffed by governments, so lots of data already exists (but not that helpful to you or me).

Please do if you have some sort of unique angle on it. I'd be very interested to hear. A MAC address is really only useful to us if we can associate it with a license plate using the data that we collect from our own systems, though. A MAC address on its own won't do much at all.


While I don't doubt that the NSA probably has records of every car MAC address ever LOL, in the USA, antennae on the roads to track this type of thing are not common, if they exist at all. They might in some places, but I'm almost entirely certain that even the Donald himself has no way to track vehicle MACs without using some sort of corrupt secret conspiratorial backdoored home routers or subpoenaing all the ISPs.

Anyhow, don't want to distract you too much from this, you're doing god's work :)

Not distracting at all. It's all part of the ecosystem. Glad you're finding it useful :)
 
Out of curiosity, is there a way to see the statistics on number of installs, record counts, earliest read, etc.?
 
No, there is no difference between the fresh install and the update. If you see anything in the application's logs about a "relation does not exist" or similar, that is the problem: your migrations.sql and schema.sql did not download correctly.

It is extremely strange that you're experiencing this when using the install scripts for both Linux and Windows. I, and many others, have tested the scripts thoroughly, and they work well.


Fundamentally, there are two parts to the system: the Next.js web app and the PostgreSQL database. I'm going to guess that your issue involves the database. If you happen to be using Unraid, please look at the previous comments. Nonetheless, it should have worked when you tried it on Windows. I don't really see any potential for user error running the scripts, so it is possible that you have found some extremely obscure bug.

Anyways, the web app is practically unbreakable, as it is entirely contained within the Docker image. If you are getting a No Stream 500, that means there is a server error. Check the logs. If it is in fact a database issue, you need to ensure that the schema.sql and migrations.sql files have the proper contents.



I'm sorry this has happened to you, and I wish I understood what caused it - even more robust scripts will come next with @VideoDad's hashing suggestion, but it should work as is. The fact that you have had trouble on both Windows and Linux is very strange. You should delete both .sql files, download them from the GitHub Repository, and place them back in that directory. You may also need to delete the volume in Docker Desktop for your database.

The whole point of using Docker is to have this be highly standardized and work easily across a range of different computers. That is both my goal for this project and Docker's intention overall: "just works."


I’ve done my best to make the scripts ensure the integrity of the files, but it seems that I may have missed something, as you are the second user (albeit out of 100+) to report difficulties.


If you see anything in the logs that seems related to the database or SQL, like a missing relation or a missing column, that is the issue 100%. Look no further: your schema and migrations are not formatted correctly. There is no other cause. The logs are there to help you - if they say something is missing, that is the cause.

I don’t know how that happens or how to recreate it, but for your purposes, you should manually copy the files, and ensure they are the exact same as the files in the GitHub repository. If you do have any idea how it happened in the first place, please do share… I have a feeling that implementing a hash check will solve this issue for most, although I still don’t know what system configuration causes it.



I’m not sure if it was you who said this, but you absolutely do NOT need to delete the container every time.



If you do manage to pinpoint the cause of your install issue, please do share so we can document for posterity.
I downloaded directly from GitHub and compared md5 between the new and existing files, no difference.
Code:
root@alpr:/opt/ALPR_Database# md5sum new/*
68299a0cc953f95cf27525b3c3023d41  new/migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe  new/schema.sql
root@alpr:/opt/ALPR_Database# md5sum *.sql
68299a0cc953f95cf27525b3c3023d41  migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe  schema.sql

The error does seem DB related:

3/25/2025, 11:32:31 AM [INFO] POST /api/plate-reads
3/25/2025, 11:32:31 AM [INFO] Received plate read data: [object Object]
3/25/2025, 11:32:31 AM [INFO] Database connection established
3/25/2025, 11:32:31 AM [ERROR] Error processing request: error: there is no unique or exclusion constraint matching the ON CONFLICT specification
 
Out of curiosity, is there a way to see the statistics on number of installs, record counts, earliest read, etc.?

I can see these in Cloudflare. I could make it public, and I considered making some basic endpoints to get that sort of data for the little widgets/shields at the top of the GitHub readme, but I don't really think it's worth the time to do that right now.


- Total docker pulls can be seen on the docker hub page. 2.4k currently. (I believe many of these are on <V1.6 as the number has not changed much since the inception of the project, and I don’t see any activity from those users in the cloud)

- Deployments opted into metrics is 129.

- Annotated AI training images submitted is about 250k. Roughly 10% of them are validated. (dated: 5 days into the training update)

- Somewhere in the neighborhood of 8 million total plate recognitions through all deployments worldwide since November.

- Users in about 1/3 of all US states, with the highest concentrations being in California, Massachusetts, New York, and Utah - In that order.


I don’t collect any data on known plates, tags, precise location, usernames, or any other system or individual plate metrics for individual deployments, so I know nothing about that.


I don’t think this will bother anyone who opted to participate in training, but in the interest of transparency, I will share that I certainly could cross reference and pinpoint locations or deanonymize users if I were to want to.

Not sure what value that would ever offer me, and I can tell you all that I have no reason to do that. But if you're worried about the little Californian Charlie being able to do that, you have the ability to turn it off (it is off by default).





Lastly, the code itself is MIT licensed, meaning it can't be sold. The submitted data (AI training) isn't technically covered under that license and could technically be sold, but I'm quite strongly opposed to doing anything like that. My finance professor Dad was of course frothing at the mouth at the idea of having millions of CV JPEGs to sell LOL, but personally, I would be kind of pissed off if I were on the other side of that and pictures of my street ended up in some public data set on the internet. And even if it were private, despite the fact that it wouldn't be violating the license and I fully could do it, the images are not mine.

Selling them would be pretty contrary to the spirit in which they were collected. The product of the data becomes public anyways through codeproject, and that is how I, and I believe those who submitted them, intended it to be.

Obviously somebody has to store them, so idk, maybe I’ll use them for something else at some point if needed, but I would not ever just give the data set away to someone, just to be 100% clear.



If this were a paid app/service, my view would be entirely different (and if anyone cares, my opinion on most things like this is the complete opposite), but I made this for this community with the very deliberate intention of making BI better with something that I needed, and I am very content to see that others needed it as well. I have precisely zero interest in capitalizing on whatever this jumbo data set would be worth. I have no need to do so, and I'd feel like a prime POS to take the data and sell it.


To be fair, none of you would ever know if I did, but I write this to explain my stance and intention and ask you to just take my word for it. I can't really prove much beyond that. And if you don’t want to, that’s why I added an off switch :)
 
Yes, we are hoping that the users validate and correct any false reads

If the OCR read is not validated, we will have to manually validate the data before using it for training

I want to improve both by retraining the models with the data we receive
Mike,

Is the dataset that you used to create the LPR model something you can share or direct me to?

I'm playing with the Coral TPU and eventually want to see if I can make a model for that, and maybe a custom LPR module for my situation.

Thanks
 
I downloaded directly from GitHub and compared md5 between the new and existing files, no difference.
Code:
root@alpr:/opt/ALPR_Database# md5sum new/*
68299a0cc953f95cf27525b3c3023d41  new/migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe  new/schema.sql
root@alpr:/opt/ALPR_Database# md5sum *.sql
68299a0cc953f95cf27525b3c3023d41  migrations.sql
9caab0fb987b904a13f3e325f5fa5fbe  schema.sql

The error does seem DB related:

3/25/2025, 11:32:31 AM [INFO] POST /api/plate-reads
3/25/2025, 11:32:31 AM [INFO] Received plate read data: [object Object]
3/25/2025, 11:32:31 AM [INFO] Database connection established
3/25/2025, 11:32:31 AM [ERROR] Error processing request: error: there is no unique or exclusion constraint matching the ON CONFLICT specification

This is sort of a long shot, but can you try replacing your migrations.sql file with this: ALPR-Database/migrations.sql at 962103071ce468cd3282b740a6a8b5648ec85555 · algertc/ALPR-Database

And see what happens? This is the version of the file before the update. It is possible that people updating already had something applied to their database schema that let some sort of issue with the new version fly under the radar. I'll test with a fully fresh version later today.

I'd recommend replacing with that version of the migrations.sql file and restarting the containers with docker compose down, then docker compose up -d. Then, docker compose down again, replace the migrations.sql file again with the newest version in the repo (the normal link/what you find if you just navigate to it from the home page), then docker compose up -d again.
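Roughly, as a sketch (the /path/to/... placeholders are wherever you saved the two versions of the file):
Bash:
cd /opt/ALPR_Database

# 1. Swap in the pre-update migrations.sql (the revision linked above)
cp migrations.sql migrations.sql.bak
cp /path/to/old/migrations.sql migrations.sql
docker compose down
docker compose up -d

# 2. Swap back to the newest migrations.sql from the repo and restart again
docker compose down
cp /path/to/new/migrations.sql migrations.sql
docker compose up -d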


Doing this will show whether something got lost in translation during the update that might have gone unnoticed by existing users. This should also replicate for you the exact same environment that the other existing users have. So, if the issue still persists, we know it is caused by something else and can look into why the database is complaining for you.
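Also, that specific error usually means Postgres can't find a unique index or constraint on the column named in the ON CONFLICT clause. A quick way to see what the plates table actually has - assuming the default container name and credentials from the compose setup:
Bash:
docker exec -it alpr_database-db-1 psql -U postgres -d postgres -c "\d plates"

If the unique constraint/index on plate_number is missing from that output, re-applying schema.sql (or resetting the database volume) should recreate it.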
 
This is sort of a long shot, but can you try replacing your migrations.sql file with this: ALPR-Database/migrations.sql at 962103071ce468cd3282b740a6a8b5648ec85555 · algertc/ALPR-Database

And see what happens? This is the version of the file before the update. It is possible that people updating already had something applied to their database schema that let some sort of issue with the new version fly under the radar. I'll test with a fully fresh version later today.

I'd recommend replacing with that version of the migrations.sql file and restarting the containers with docker compose down, then docker compose up -d. Then, docker compose down again, replace the migrations.sql file again with the newest version in the repo (the normal link/what you find if you just navigate to it from the home page), then docker compose up -d again.


Doing this will show whether something got lost in translation during the update that might have gone unnoticed by existing users. This should also replicate for you the exact same environment that the other existing users have. So, if the issue still persists, we know it is caused by something else and can look into why the database is complaining for you.
Completed. Error messages stay the same, complaining about the ON CONFLICT statements, of which there are only two. I read some of the docs, and they seem to indicate there is nothing unique to identify a single row - but I'm certainly no DBA or developer.
 
@svalvasori Can you try connecting to your SQL container and then issuing this command:

psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql

On my test system, I was seeing some errors that look sort of similar to yours:

app-1 | Database connection established
app-1 | [FileStorage] Successfully saved image
db-1 | 2025-03-25 18:26:54.744 GMT [11787] ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
db-1 | 2025-03-25 18:26:54.744 GMT [11787] STATEMENT: WITH new_plate AS (
db-1 | INSERT INTO plates (plate_number)
db-1 | VALUES ($1)
app-1 | Error processing request: error: there is no unique or exclusion constraint matching the ON CONFLICT specification
db-1 | ON CONFLICT (plate_number) DO NOTHING
app-1 | at <unknown> (/app/node_modules/pg/lib/client.js:535:17)
db-1 | ),
app-1 | at async p (/app/.next/server/app/api/plate-reads/route.js:1:6064)

My test was creating a new tag; if it fails to do that, then I know the database is having some issues. Anyway, the above fixed it for me.
 
@svalvasori Can you try connecting to your SQL container and then issuing this command:

psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql

On my test system, I was seeing some errors that look sort of similar to yours:

app-1 | Database connection established
app-1 | [FileStorage] Successfully saved image
db-1 | 2025-03-25 18:26:54.744 GMT [11787] ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
db-1 | 2025-03-25 18:26:54.744 GMT [11787] STATEMENT: WITH new_plate AS (
db-1 | INSERT INTO plates (plate_number)
db-1 | VALUES ($1)
app-1 | Error processing request: error: there is no unique or exclusion constraint matching the ON CONFLICT specification
db-1 | ON CONFLICT (plate_number) DO NOTHING
app-1 | at <unknown> (/app/node_modules/pg/lib/client.js:535:17)
db-1 | ),
app-1 | at async p (/app/.next/server/app/api/plate-reads/route.js:1:6064)

My test was creating a new tag; if it fails to do that, then I know the database is having some issues. Anyway, the above fixed it for me.
Help me understand where to run the psql command. I'm thinking you mean inside the alpr_database-db-1 container. I tried that with docker exec without luck, so I'm going to need additional help with that.
 
Help me understand where to run the psql command. I'm thinking you mean inside the alpr_database-db-1 container. I tried that with docker exec without luck, so I'm going to need additional help with that.
Scratch that. Apparently that did some good since I have my first few plates in. I thought most of this was a fail.
Code:
root@alpr:/opt/ALPR_Database# docker exec -it alpr_database-db-1 psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql
SET
SET
SET
SET
SET
 set_config
------------

(1 row)

SET
SET
SET
SET
CREATE EXTENSION
COMMENT
CREATE EXTENSION
COMMENT
psql:/docker-entrypoint-initdb.d/schema.sql:58: ERROR:  function "update_updated_at_column" already exists with same argument types
ALTER FUNCTION
SET
SET
psql:/docker-entrypoint-initdb.d/schema.sql:77: ERROR:  relation "known_plates" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:93: ERROR:  relation "plate_notifications" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:108: ERROR:  relation "plate_notifications_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
psql:/docker-entrypoint-initdb.d/schema.sql:140: ERROR:  relation "plate_reads" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:155: ERROR:  relation "plate_reads_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
psql:/docker-entrypoint-initdb.d/schema.sql:175: ERROR:  relation "plate_tags" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:190: ERROR:  relation "plates" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:219: ERROR:  function "update_plate_occurrence_count" already exists with same argument types
ALTER FUNCTION
psql:/docker-entrypoint-initdb.d/schema.sql:224: ERROR:  trigger "plate_reads_count_trigger" for relation "plate_reads" already exists
psql:/docker-entrypoint-initdb.d/schema.sql:227: ERROR:  relation "idx_plates_occurrence_count" already exists
psql:/docker-entrypoint-initdb.d/schema.sql:238: ERROR:  relation "tags" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:247: ERROR:  relation "devmgmt" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:253: ERROR:  relation "devmgmt" does not exist
LINE 1: INSERT INTO devmgmt (id, update1)
                    ^
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE


Any idea exactly why that fixed it? Do I need to run that after each update? And thank you!
 
Help me understand where to run the psql command. I'm thinking you mean inside the alpr_database-db-1 container. I tried that with docker exec without luck, so I'm going to need additional help with that.

Code:
docker ps -a                        # find the container ID of your postgres container
docker exec -it [container id] /bin/bash

# then, inside the container:
psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql
 
Scratch that. Apparently that did some good since I have my first few plates in. I thought most of this was a fail.
Code:
root@alpr:/opt/ALPR_Database# docker exec -it alpr_database-db-1 psql -d postgres -U postgres -f /docker-entrypoint-initdb.d/schema.sql
SET
SET
SET
SET
SET
 set_config
------------

(1 row)

SET
SET
SET
SET
CREATE EXTENSION
COMMENT
CREATE EXTENSION
COMMENT
psql:/docker-entrypoint-initdb.d/schema.sql:58: ERROR:  function "update_updated_at_column" already exists with same argument types
ALTER FUNCTION
SET
SET
psql:/docker-entrypoint-initdb.d/schema.sql:77: ERROR:  relation "known_plates" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:93: ERROR:  relation "plate_notifications" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:108: ERROR:  relation "plate_notifications_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
psql:/docker-entrypoint-initdb.d/schema.sql:140: ERROR:  relation "plate_reads" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:155: ERROR:  relation "plate_reads_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
psql:/docker-entrypoint-initdb.d/schema.sql:175: ERROR:  relation "plate_tags" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:190: ERROR:  relation "plates" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:219: ERROR:  function "update_plate_occurrence_count" already exists with same argument types
ALTER FUNCTION
psql:/docker-entrypoint-initdb.d/schema.sql:224: ERROR:  trigger "plate_reads_count_trigger" for relation "plate_reads" already exists
psql:/docker-entrypoint-initdb.d/schema.sql:227: ERROR:  relation "idx_plates_occurrence_count" already exists
psql:/docker-entrypoint-initdb.d/schema.sql:238: ERROR:  relation "tags" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:247: ERROR:  relation "devmgmt" already exists
ALTER TABLE
psql:/docker-entrypoint-initdb.d/schema.sql:253: ERROR:  relation "devmgmt" does not exist
LINE 1: INSERT INTO devmgmt (id, update1)
                    ^
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE


Any idea exactly why that fixed it? Do I need to run that after each update? And thank you!

For whatever reason, the database wasn't built properly for you. I am not sure what causes this, but I ran into it when I was building the Docker image from source: it wasn't working, with similar errors to yours each time it tried to write something to the database. I would get the images and they'd be saved properly to disk, but I'd see a bunch of errors whenever it tried to update the database.
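If it helps explain why: assuming the db service is based on the official Postgres image, the scripts in /docker-entrypoint-initdb.d/ only run the very first time the data volume is initialized, so if that first run fails or gets skipped, the tables never get created and every write fails afterwards. Re-running schema.sql by hand, like you did, fills in the missing objects. A quick sanity check that the tables the app expects are all there now:
Bash:
docker exec -it alpr_database-db-1 psql -U postgres -d postgres -c "\dt"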