PostgreSQL: updating millions of rows
Many times, you come across a requirement to update a large table in SQL Server that has millions of rows (say, more than 5 million).
Always use a WHERE clause to limit the data that is to be updated.

@Sinkinson I haven't actually gotten it to run to completion. I've canceled it after about 2.5 hours (or, on the test server I was using, when the transaction log used up all available space on the disk, which I've since remedied). Did you check for blocking, or what the wait type was? I tried putting the IDs in a table and deleting in batches of 100 (select 100 IDs from the ID table, delete the rows whose IDs are IN that list, then delete those 100 IDs from the ID table). It's hard to gauge how long it will take, since I can't see its progress until it's done, but the query was still running after 2 days. The columns that are not on the list retain their original values.
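The batch-delete loop described above can be sketched as follows. This is a minimal illustration in Python against an in-memory SQLite database, not the poster's actual SQL Server code; the table names `big_table` and `ids_to_delete` and the row counts are made up for the example.

```python
import sqlite3

# Illustrative setup: a "big" table and a staging table of IDs to delete.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO big_table (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1, 1001)],
)
conn.execute("CREATE TABLE ids_to_delete (id INTEGER PRIMARY KEY)")
conn.executemany(
    "INSERT INTO ids_to_delete (id) VALUES (?)",
    [(i,) for i in range(1, 1001) if i % 2 == 0],  # mark the even IDs for deletion
)
conn.commit()

BATCH = 100
while True:
    # Take one batch of IDs from the staging table.
    batch = [r[0] for r in conn.execute(
        "SELECT id FROM ids_to_delete LIMIT ?", (BATCH,)
    )]
    if not batch:
        break
    placeholders = ",".join("?" * len(batch))
    # Delete the matching rows, then remove the processed IDs from staging.
    conn.execute(f"DELETE FROM big_table WHERE id IN ({placeholders})", batch)
    conn.execute(f"DELETE FROM ids_to_delete WHERE id IN ({placeholders})", batch)
    conn.commit()  # committing per batch keeps each transaction (and its log) small

remaining = conn.execute("SELECT COUNT(*) FROM big_table").fetchone()[0]
print(remaining)  # the 500 odd-numbered rows remain
```

The per-batch commit is the point of the pattern: each transaction stays small, so the log cannot balloon the way it did in the anecdote above.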
I need to delete about 2 million rows from my PG database. Four days later, it is still running, with only 297,268 rows deleted.
Suppose you want to update a column to 0 wherever it currently contains a negative value.
Let us also assume that over 2 million rows in that table have a negative value in that column.
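A minimal sketch of that update, done in batches so each transaction stays small. It uses Python's sqlite3 module with a hypothetical table `t` and column `amount`, purely to illustrate the pattern; on a real server you would run the equivalent SQL directly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, amount INTEGER)")
# A small mix of negative and non-negative values (names and data made up).
conn.executemany("INSERT INTO t (amount) VALUES (?)",
                 [(v,) for v in (-5, 3, -1, 0, 7, -9)])
conn.commit()

# The naive `UPDATE t SET amount = 0 WHERE amount < 0` works, but on millions
# of rows it runs as one huge transaction. Batching keeps each commit small.
BATCH = 2
while True:
    cur = conn.execute(
        "UPDATE t SET amount = 0 WHERE id IN ("
        "  SELECT id FROM t WHERE amount < 0 LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:  # no negative rows left
        break

print([r[0] for r in conn.execute("SELECT amount FROM t ORDER BY id")])
# → [0, 3, 0, 0, 7, 0]
```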
You say that the UPDATE is running in a loop and was still running after 2.5 hours. Was it the loop itself that was taking a long time or a single UPDATE statement?
That is an important distinction, and the information given so far leaves it ambiguous.
Is SQL Server smart enough to see that it doesn't need to update the other indexes (as they are not part of the UPDATE), or would we benefit from temporarily disabling the indexes while we do the update statements?
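One way to answer that empirically is to time the bulk update both with the secondary index in place and with it dropped and recreated afterwards. The sketch below shows only the drop-and-recreate pattern, using sqlite3 with hypothetical names (`t`, `amount`, `idx_t_tag`); whether dropping the index actually helps depends on the engine and the indexes involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, amount INTEGER, tag TEXT)")
conn.executemany("INSERT INTO t (amount, tag) VALUES (?, ?)",
                 [(-i if i % 2 else i, f"tag-{i}") for i in range(10)])
# Secondary index on a column the bulk update does not touch.
conn.execute("CREATE INDEX idx_t_tag ON t (tag)")
conn.commit()

# Drop the index, run the bulk update, then rebuild the index.
conn.execute("DROP INDEX idx_t_tag")
conn.execute("UPDATE t SET amount = 0 WHERE amount < 0")
conn.execute("CREATE INDEX idx_t_tag ON t (tag)")
conn.commit()

negatives = conn.execute("SELECT COUNT(*) FROM t WHERE amount < 0").fetchone()[0]
print(negatives)  # 0
```

In SQL Server specifically you might instead use `ALTER INDEX ... DISABLE` and `ALTER INDEX ... REBUILD` rather than dropping the index outright; the pattern is the same.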