This guide helps you migrate full Home Assistant history from the default home-assistant_v2.db
(SQLite) to a PostgreSQL 17 instance deployed via TrueNAS SCALE Apps.
It supports:

- ✅ `statistics_meta`, `statistics`, `statistics_short_term`
- ✅ `recorder_runs`, `schema_changes`
- ✅ `event_types`, `states_meta`, `event_data`, `state_attributes`, `events`, `states`

All with full support for foreign keys, batching, and progress logging.
NOTE: this guide was written after the whole process was completed, so some slight adjustments may be needed.

The process was carried out in a TrueNAS SCALE environment, but it is not strictly tied to that OS: it should work in any case where you want to migrate a Home Assistant SQLite database to a PostgreSQL one.

The procedure runs inside a Python Docker container to minimize setup time, avoiding Python or library version conflicts and clutter on the host system.
- PostgreSQL 17 installed (or, in my specific case, the Home Assistant app installed from TrueNAS SCALE Apps)
- The PostgreSQL credentials (database name, username, password)
- Your SQLite file (`home-assistant_v2.db`) accessible, e.g. under `/mnt/data/homeassistant/config/`
```bash
docker run -it --rm --network <network_name_of_postgres> -v /mnt/data/homeassistant/config:/config python:3.12 bash
```

Replace `<network_name_of_postgres>` with the correct Docker network (e.g. `ix-home-assistant_default`); you can list the available networks with `docker network ls`.
Inside the container:

```bash
pip install psycopg2-binary
```
Create the script:
```bash
cat > /migrate_all.py << 'EOF'
import sqlite3
import psycopg2
from psycopg2.extras import execute_values

SQLITE_PATH = "/config/home-assistant_v2.db"

PG_CONFIG = {
    "dbname": "<your-db-name>",
    "user": "<your-db-user>",
    "password": "<your-db-password>",
    "host": "<your-postgres-host>",
    "port": "5432"
}

BATCH_SIZE = 5000

ALL_TABLES = [
    "states",
    "events",
    "state_attributes",
    "event_data",
    "states_meta",
    "event_types",
    "schema_changes",
    "recorder_runs",
    "statistics_short_term",
    "statistics",
    "statistics_meta"
]

def clean_database(pg_conn):
    print("🧹 Cleaning all tables in PostgreSQL...")
    with pg_conn.cursor() as cur:
        for table in ALL_TABLES:
            try:
                cur.execute(f"DELETE FROM {table}")
                print(f"  ⨯ {table} cleared")
            except Exception as e:
                print(f"  ⚠️ Error cleaning {table}: {e}")
    pg_conn.commit()
    print("✅ Cleanup complete.\n")

def migrate_table(table_name, sqlite_cursor, pg_conn):
    print(f"\n📦 Starting migration of: {table_name}")
    try:
        sqlite_cursor.execute(f"SELECT * FROM {table_name}")
    except Exception as e:
        print(f"❌ Error accessing {table_name} in SQLite: {e}")
        return

    column_names = [desc[0] for desc in sqlite_cursor.description]
    # Quote reserved column names such as "end"
    columns = ", ".join([f'"{col}"' if col.lower() in ('end',) else col for col in column_names])

    count = 0
    while True:
        rows = sqlite_cursor.fetchmany(BATCH_SIZE)
        if not rows:
            break
        with pg_conn.cursor() as pg_cursor:
            try:
                insert_sql = f"INSERT INTO {table_name} ({columns}) VALUES %s ON CONFLICT DO NOTHING"
                execute_values(pg_cursor, insert_sql, rows)
                pg_conn.commit()
                count += len(rows)
                print(f"  → {count} rows migrated...")
            except Exception as e:
                pg_conn.rollback()
                print(f"❌ Error in batch ({count} processed): {e}")
                break

    print(f"✅ Finished migration of {table_name} ({count} rows)")

def main():
    sqlite_conn = sqlite3.connect(SQLITE_PATH)
    sqlite_cursor = sqlite_conn.cursor()
    pg_conn = psycopg2.connect(**PG_CONFIG)

    clean_database(pg_conn)

    # Order for FK constraints: parent tables before the tables referencing them
    ordered_tables = [
        "event_types",
        "states_meta",
        "event_data",
        "state_attributes",
        "events",
        "states",
        "schema_changes",
        "recorder_runs",
        "statistics_meta",
        "statistics",
        "statistics_short_term"
    ]

    for table in ordered_tables:
        migrate_table(table, sqlite_cursor, pg_conn)

    sqlite_conn.close()
    pg_conn.close()
    print("\n🎉 All tables migrated successfully.")

if __name__ == "__main__":
    main()
EOF
```
Replace the `PG_CONFIG` values with your actual credentials.
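If you'd rather not hard-code the credentials, one variation (my own convention, not part of the original script) is to read them from environment variables with the stdlib `os` module, falling back to placeholders:

```python
import os

# The PG_* variable names are my own convention, not something
# Home Assistant or psycopg2 defines.
PG_CONFIG = {
    "dbname": os.environ.get("PG_DBNAME", "<your-db-name>"),
    "user": os.environ.get("PG_USER", "<your-db-user>"),
    "password": os.environ.get("PG_PASSWORD", "<your-db-password>"),
    "host": os.environ.get("PG_HOST", "<your-postgres-host>"),
    "port": os.environ.get("PG_PORT", "5432"),
}
```

You can then pass them into the container with `docker run -e PG_HOST=... -e PG_PASSWORD=... ...` instead of editing the script.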
```bash
python /migrate_all.py
```

You'll see live progress for each table, logged every 5000 rows.
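To double-check the result, compare per-table row counts between the two databases. The SQLite side can be counted with a few lines of stdlib Python (run the matching `SELECT COUNT(*)` in `psql` on the PostgreSQL side); the demo below uses a throwaway database, so point `db_path` at `/config/home-assistant_v2.db` for the real check:

```python
import os
import sqlite3
import tempfile

def table_counts(db_path, tables):
    """Return {table: row_count} for an SQLite database."""
    conn = sqlite3.connect(db_path)
    counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
              for t in tables}
    conn.close()
    return counts

# Throwaway demo database standing in for home-assistant_v2.db
path = os.path.join(tempfile.gettempdir(), "demo_counts.db")
demo = sqlite3.connect(path)
demo.execute("DROP TABLE IF EXISTS states")
demo.execute("CREATE TABLE states (id INTEGER)")
demo.executemany("INSERT INTO states VALUES (?)", [(1,), (2,)])
demo.commit()
demo.close()
print(table_counts(path, ["states"]))  # {'states': 2}
```

Small differences on `states`/`events` can be normal if Home Assistant was still writing to the old database during the copy.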
You've successfully migrated the full Home Assistant history:

- ✅ Long-term statistics
- ✅ Full state history and events
- ✅ Valid foreign key references
- ✅ Optimized for PostgreSQL

Test it via Developer Tools → History or the "Statistics" graphs.
After a successful migration, you may notice that the Energy Dashboard doesn't show any new data while entity graphs work correctly. This is likely because:

- Home Assistant uses derived statistics in the `statistics` and `statistics_short_term` tables to populate the Energy dashboard.
- If the recorder is working and values are available for your energy-related sensors, but statistics have stopped after the migration, the derived statistics may need to be recalculated.
- No new data in Settings → Statistics
- Energy dashboard shows old values or gaps
- Home Assistant logs show:

```
Error when validating DB schema: duplicate key value violates unique constraint "statistics_pkey"
```
A duplicate `id` in the `statistics` table caused the recorder to halt statistics processing.
- Open the PostgreSQL shell and run:

  ```sql
  SELECT MAX(start_ts), COUNT(*) FROM statistics;
  ```

- Check Home Assistant logs for `duplicate key` errors on `statistics_pkey`.
- If the latest timestamp is stuck at the day of migration, it indicates that statistics aren't being updated.
- From Home Assistant Developer Tools → Statistics, check if stats are missing after that date.
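Beyond the checks above, you can confirm the sequence desync directly by comparing the sequence position with the highest `id` actually present. The sequence name `statistics_id_seq` is the PostgreSQL default for a serial `id` column on `statistics`; verify yours with `\d statistics` in `psql`:

```sql
-- If last_value is below MAX(id), the next automatic insert will
-- collide with an already-migrated row.
SELECT last_value FROM statistics_id_seq;
SELECT MAX(id) FROM statistics;
```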
To fix it:

- Fix the auto-increment sequence by executing:

  ```sql
  SELECT setval('statistics_id_seq', (SELECT MAX(id) FROM statistics));
  ```

- Restart Home Assistant.
- Home Assistant will start to compute statistics again.
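The same desync can in principle affect any table whose serial primary key was bulk-loaded with explicit ids, since copying rows does not advance the sequence. If similar `duplicate key` errors appear for other tables, the generic form of the fix looks like this (shown for `states`, whose primary key is `state_id`; `pg_get_serial_sequence` resolves the sequence name for you):

```sql
SELECT setval(pg_get_serial_sequence('states', 'state_id'),
              (SELECT MAX(state_id) FROM states));
```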
⏳ It may take up to an hour before data reappears in the Energy dashboard.

- You can re-run the same SQL query (`MAX(start_ts)`) to ensure stats are now fresh.
- Also verify the Energy dashboard now shows recent energy usage and production data.
ChatGPT :D
Tested on:

- Home Assistant Core (Docker)
- PostgreSQL 17 (TrueNAS SCALE App)
- Python 3.12 + `psycopg2-binary`

Feel free to share this Gist or extend it.