What is denormalization?
Database denormalization is a performance-tuning technique that improves query speed at the cost of some data integrity. It deliberately duplicates data in multiple places, which makes updates harder but lets queries read fewer tables. It is typically used when read performance matters more than strict integrity guarantees.
Most often, the term refers to a strategy applied to an already normalized database in order to improve performance. The goal is to place redundant data where it will do the most good.
There are a few ways you can go about this: adding fields to existing tables, creating new tables, or duplicating existing tables.
The idea is to reduce the execution time of specific queries, either by simplifying data access or by materializing report results computed from the original data.
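As an illustration of the first approach (adding fields to existing tables), here is a minimal sketch using SQLite. The `customers` and `orders` tables and their columns are hypothetical: the customer's name is copied into `orders` so that order listings no longer need a join.

```python
import sqlite3

# Hypothetical schema for illustration; table and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (100, 1, 25.0);
""")

# Normalized read: requires a join to fetch the customer's name.
row = conn.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchone()
print(row)  # (100, 'Alice', 25.0)

# Denormalization: add a redundant column and copy the name into it.
conn.execute("ALTER TABLE orders ADD COLUMN customer_name TEXT")
conn.execute("""
    UPDATE orders SET customer_name =
        (SELECT name FROM customers WHERE customers.id = orders.customer_id)
""")

# Denormalized read: the join is gone.
row = conn.execute("SELECT id, customer_name, total FROM orders").fetchone()
print(row)  # (100, 'Alice', 25.0)
```

The trade-off is visible already: every future change to a customer's name must now be propagated into `orders` as well.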
Denormalization presupposes a normalized database. It is crucial to distinguish a database that was never normalized from a normalized database that has been deliberately denormalized. The latter is unproblematic; the former may indicate design errors or gaps in the knowledge of the specialists involved.
When is denormalization useful?
- Preserving historical data
- Improving query performance
- Speeding up reporting
- Precomputing frequently requested values
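The last two points can be sketched together: an aggregate is precomputed into a summary table so that reports read one small table instead of scanning and grouping the raw data on every request. The `sales` and `daily_sales_summary` names are illustrative, not from any particular system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (id INTEGER PRIMARY KEY, day TEXT, amount REAL);
    INSERT INTO sales (day, amount) VALUES
        ('2024-01-01', 10.0), ('2024-01-01', 5.0), ('2024-01-02', 7.5);
""")

# Precompute the aggregate once into a denormalized report table.
conn.executescript("""
    CREATE TABLE daily_sales_summary AS
        SELECT day, SUM(amount) AS total, COUNT(*) AS n
        FROM sales GROUP BY day;
""")

# Reports now read the small precomputed table instead of scanning sales.
summary = conn.execute(
    "SELECT day, total, n FROM daily_sales_summary ORDER BY day"
).fetchall()
print(summary)  # [('2024-01-01', 15.0, 2), ('2024-01-02', 7.5, 1)]
```

In practice such a table would be refreshed on a schedule or kept current by application code or triggers; how stale it is allowed to become is a design decision.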
The disadvantages of database denormalization
Denormalization has drawbacks that should be weighed before implementation:
- Requiring more disk space
- Introducing data anomalies
- Making documentation more difficult
- Slowing down other operations
- Creating more code complexity
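The "data anomalies" point deserves a concrete sketch. Assuming the same hypothetical redundant `customer_name` column as before: if a write updates the source table but not the copy, the two silently diverge, which is exactly the update anomaly normalization was designed to prevent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         customer_name TEXT);  -- redundant denormalized copy
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (100, 1, 'Alice');
""")

# The customer is renamed, but only the source table is updated...
conn.execute("UPDATE customers SET name = 'Alicia' WHERE id = 1")

# ...so the redundant copy in orders is now stale: an update anomaly.
source = conn.execute("SELECT name FROM customers WHERE id = 1").fetchone()[0]
copy = conn.execute(
    "SELECT customer_name FROM orders WHERE id = 100"
).fetchone()[0]
print(source, copy)  # Alicia Alice
```

Avoiding this requires extra write-path code (or triggers) that updates every copy, which is the "more code complexity" and "slowing down other operations" cost listed above.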