SQL: Updating a Large Number of Rows
Reality: If you are not modifying the keys in an index or making a column wider, UPDATE statements can be executed as in-place modifications of the row. This allows the database to write only a compact old/new value record into the transaction log, which typically generates at least an order of magnitude less log traffic than writing full row images, and is thus faster. UPDATE and DELETE statements do tend to be row logged rather than allocation logged, but in-place updates keep each individual log entry small.
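As a sketch of the kind of statement that can qualify for this optimisation in SQL Server (the table and column names here are hypothetical, and the exact conditions vary by engine and version):

```sql
-- Hypothetical fact table: Amount is a fixed-width numeric column
-- that is NOT part of any index key.
UPDATE dbo.FactSales
SET    Amount = Amount * 1.10      -- same column width, no key change:
                                   -- a candidate for an in-place update
WHERE  SalesDate >= '2012-01-01';

-- Contrast: updating an indexed key column (or widening a
-- variable-length column) can force row movement and index
-- maintenance, generating far more transaction log traffic.
UPDATE dbo.FactSales
SET    CustomerKey = 42            -- assume CustomerKey is an index key
WHERE  SalesID = 1001;
```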
Second, the transaction log bottleneck "wall" is widely exaggerated.
I have personally driven 750MB/sec write log traffic into a single database in SQL Server using a Fusion IO card.
I have seen colleagues do 120MB/sec with traditional 15K spindles.
True: I have also driven 3GB/sec (around 10TB in an hour) using allocation logged INSERT, which is faster than its row logged sibling.
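A quick sanity check of that last figure, since the two numbers are quoted in different units:

```python
# Does 3 GB/sec of allocation logged INSERT traffic really amount
# to "around 10TB in an hour"?
gb_per_sec = 3
seconds_per_hour = 3600
tb_per_hour = gb_per_sec * seconds_per_hour / 1024  # binary TB

print(f"{tb_per_hour:.1f} TB/hour")  # → 10.5 TB/hour
```

So 3GB/sec sustained for an hour is about 10.5TB, consistent with the "around 10TB" claim.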
There seems to be a myth perpetuated out there in the database community that UPDATE statements are somehow “bad” and should be avoided in data warehouses.
Let us have a look at the facts for a moment and weigh up whether this myth has any merit.

First of all, the speed of an UPDATE statement as compared to bulk inserts depends on the database engine you run on.

Row logged operations write a transaction log entry every time a row/tuple is modified. This means that the amount of transaction log traffic generated is proportional to the number of rows touched.

Allocation logged operations (called "minimal logging" in SQL Server) only write the physical allocations to the transaction log, if anything at all. This means the log traffic (if any) is proportional to the size of the data touched. Typically, ACID databases only allow bulk style loads, index builds, and large table/partition truncations and drops to be allocation logged.
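A minimal T-SQL sketch of the allocation logged load patterns mentioned above (table names are hypothetical; in SQL Server, minimal logging additionally requires the SIMPLE or BULK_LOGGED recovery model):

```sql
-- Bulk style load into a heap with a table lock: a common pattern
-- that SQL Server can minimally log.
INSERT INTO dbo.FactSales WITH (TABLOCK)
SELECT * FROM dbo.StagingSales;

-- SELECT ... INTO creates and populates a new table and is another
-- commonly minimally logged operation.
SELECT *
INTO   dbo.FactSales_Copy
FROM   dbo.StagingSales;

-- By contrast, a plain row-by-row insert is fully row logged:
-- log traffic grows with the number of rows touched.
```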