Optimizing Recovery Performance in SQL Server 2019
SQL Server 2019 brought numerous features to improve its performance and disaster recovery capabilities. Features such as Always On Availability Groups depend on the database recovery process: after a failover, the new primary must complete recovery before the database comes online. Traditionally, this recovery process can require a substantial amount of time to complete, particularly when dealing with large databases and long-running transactions.
In this article, we will delve into the ADR (Accelerated Database Recovery) preallocation factor and its impact on recovery performance in SQL Server 2019. We will explore how this setting works, how it is configured, and the advantages it offers in terms of optimization.
Understanding Accelerated Database Recovery (ADR)
ADR is a feature in SQL Server 2019 that speeds up database recovery by keeping row versions in a persisted version store (PVS) inside the database itself, so recovery no longer depends on scanning the log for the oldest active transaction. This offers significant benefits in terms of both failover time and the time it takes to recover a database in a disaster scenario.
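For context, ADR is enabled per database. A minimal example follows, with an illustrative database name and, optionally, a dedicated filegroup for the persisted version store (the filegroup name is also illustrative and must already exist):

ALTER DATABASE [SalesDB]
SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Optionally, place the persisted version store on its own filegroup:
ALTER DATABASE [SalesDB]
SET ACCELERATED_DATABASE_RECOVERY = ON
(PERSISTED_VERSION_STORE_FILEGROUP = [VersionStoreFG]);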
As with any automated recovery mechanism, ADR requires careful planning to optimize its performance. One aspect that plays a crucial role in this optimization is the ADR preallocation factor.
A Fundamental Concept: The Preallocation Factor
The preallocation factor is a setting that governs how much space SQL Server preallocates, in the background, for the persisted version store (PVS) that holds row versions for ADR. Preallocating this space ahead of demand keeps page-allocation work out of user transactions.
The preallocation factor determines how aggressively that space is preallocated: a higher value preallocates more PVS space. The documented default is 4, and the valid range can be read directly from sys.configurations, as shown below. The most suitable value depends on various factors, such as the workload's rate of version generation and overall performance considerations.
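Rather than relying on a remembered range, you can read the option's minimum, maximum, and current value from the catalog:

SELECT name, minimum, maximum, value_in_use
FROM sys.configurations
WHERE name LIKE N'ADR%';  -- matches 'ADR Preallocation Factor' and related ADR options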
What is the Preallocation Factor Used For?
The preallocation factor helps SQL Server stay ahead of the space demands of the persisted version store, which lives in the database files themselves. Because row versions are generated as transactions modify pages, allocating PVS pages on demand would slow down the transactions doing the modifying; preallocation moves that cost into a background task. At recovery time, the versions already persisted in the database allow SQL Server to revert uncommitted changes logically instead of scanning and undoing the transaction log for them.
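To see how much space the persisted version store is actually occupying per database, SQL Server exposes the sys.dm_tran_persistent_version_store_stats DMV. A minimal query:

SELECT DB_NAME(database_id) AS database_name,
       persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats
WHERE persistent_version_store_size_kb > 0;  -- only databases with an active PVS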
Why Efficient Preallocation Factor Settings Matter
An efficiently configured preallocation factor brings several benefits, including:
- Faster database recovery: with row versions persisted, the undo phase becomes a logical revert rather than a scan of the transaction log, so there is far less work to do at recovery time.
- Improved performance: preallocating version store space in the background keeps allocation overhead out of user transactions, smoothing write I/O as versions are persisted.
- Lower failover time: recovery of the database as a whole completes faster, which in an availability group translates directly into shorter failovers.
Configuring the Preallocation Factor
We can configure the preallocation factor with sp_configure; it is an advanced, instance-level option:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure N'ADR Preallocation Factor', 4;  -- 4 is the documented default
RECONFIGURE;
Because this is a server-level setting, a single call applies to every ADR-enabled database on the instance; there is nothing to repeat per database. Enabling ADR itself, by contrast, is a per-database operation, and across an availability group it can be automated with common scripting methods, such as a for-each loop over the databases in T-SQL.
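As a sketch of that loop, the following cursor enables ADR on every database that belongs to an availability group. It assumes you run it on the primary replica and that the default version store placement is acceptable:

DECLARE @dbName sysname, @sql nvarchar(max);

DECLARE ag_dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT d.name
    FROM sys.databases AS d
    WHERE d.replica_id IS NOT NULL;  -- databases that belong to an availability group

OPEN ag_dbs;
FETCH NEXT FROM ag_dbs INTO @dbName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER DATABASE ' + QUOTENAME(@dbName)
             + N' SET ACCELERATED_DATABASE_RECOVERY = ON;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM ag_dbs INTO @dbName;
END
CLOSE ag_dbs;
DEALLOCATE ag_dbs;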
Measuring Preallocation Factor Performance
Evaluating the impact of the preallocation factor on recovery performance is essential for configuring it optimally. One practical approach is to run a representative workload against the database, force a recovery, and use the measured recovery time as your sample, repeating the experiment at different factor values.
For a proper understanding, and to choose a better setting, follow these basic steps:
- Configure the desired preallocation factor value (for example, double the default).
- Take a full backup and a transaction log backup so each test run starts from a consistent, repeatable state.
- Run a representative workload so the persisted version store is actively used.
- Restart the SQL Server service (or fail over the availability group) to force recovery.
- Record the recovery time after the restart, for example from the recovery messages in the error log (see the sketch after this list).
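One way to read the recovery timing, sketched under the assumption that the default error log settings are in place (xp_readerrorlog is undocumented but widely used for this):

-- Search the current error log for recovery-completion messages.
-- Parameters: log number (0 = current), log type (1 = SQL Server error log),
-- and a search string.
EXEC master.dbo.xp_readerrorlog 0, 1, N'Recovery completed';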
Determining the Optimal Value
Determining the optimal preallocation factor value may take several attempts. An iterative approach works well: apply a value, force a recovery, measure how long it takes, and adjust.
Trial multiple values, beginning with small incremental jumps and widening the steps as needed, so you can compare how recovery time responds to each change.
Keep an eye on database file sizes and on how the persisted version store grows as you change the value. Where applicable, test against multiple data sets and record the results for each factor setting, because both the size of the database and system workload influence which value works best; a simple bookkeeping sketch follows.
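A minimal bookkeeping sketch, assuming you are free to create a small utility table for the measurements (all names here are illustrative):

CREATE TABLE dbo.RecoveryBenchmark
(
    run_id               int IDENTITY(1, 1) PRIMARY KEY,
    preallocation_factor int           NOT NULL,  -- value under test
    database_name        sysname       NOT NULL,
    recovery_seconds     decimal(9, 2) NOT NULL,  -- taken from the error log
    pvs_size_kb          bigint        NULL,      -- from sys.dm_tran_persistent_version_store_stats
    measured_at          datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Record one measurement per test run:
INSERT INTO dbo.RecoveryBenchmark
    (preallocation_factor, database_name, recovery_seconds, pvs_size_kb)
VALUES
    (8, N'SalesDB', 42.50, 1048576);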
Finally, refer to the official Microsoft documentation for further insights into ADR performance and recovery.
Real-World Scenarios
A well-tuned ADR preallocation factor pays off in a variety of real-world scenarios. Any application that interacts with SQL Server, whether an internal or external application or service, benefits from this addition to your high availability and disaster recovery strategy.
It is worth investing time in this setting and adjusting it as needed before relying on it in production. The right value depends heavily on the system itself, its operating environment, and its operational load.
If you are interested in receiving further guidance or services related to SQL Server optimization and high availability, please feel free to reach out to PersonIT to schedule a consultation.