Rolling restart
YDB clusters can be updated without downtime because YDB normally has redundant components and supports a rolling restart procedure. To ensure continuous data availability, YDB includes a Cluster Management System (CMS) that tracks all outages and nodes taken offline for maintenance, such as restarts. The CMS halts new maintenance requests if they might put data availability at risk.
However, even though data remains available, restarting all nodes within a relatively short period might noticeably impact overall performance. Each tablet running on a restarted node is relaunched on a different node. Moving a tablet between nodes takes time and may affect the latencies of queries that involve it. See the recommendations for rolling restart.
Furthermore, a new YDB version may handle queries differently. While performance generally improves with each update, certain corner cases may occasionally show degraded performance. See the recommendations for new version performance.
Diagnostics
Warning
Diagnosing YDB rolling restarts and updates relies only on secondary symptoms. To be absolutely sure, contact your database administrator.
To check if the YDB cluster is currently being updated:
- Open the Embedded UI.
- On the Nodes tab, check whether the YDB versions of the nodes differ. Also check whether the nodes with the higher YDB version have lower uptime values. A scripted version of this check is sketched after this list.
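You can also script this check. The sketch below queries the JSON viewer API that backs the Embedded UI and groups nodes by reported version; the port, the `/viewer/json/nodes` path, and the response fields (`Nodes`, `SystemState`, `Host`, `Version`) are assumptions about a typical Embedded UI deployment, so verify them against your cluster before relying on the output.

```python
import collections
import json
import urllib.request

# Hypothetical monitoring endpoint of any cluster node; adjust the host and port.
VIEWER_URL = "http://ydb-node-1.example.com:8765/viewer/json/nodes"

with urllib.request.urlopen(VIEWER_URL, timeout=10) as resp:
    data = json.load(resp)

# Group hosts by the YDB version they report.
by_version = collections.defaultdict(list)
for node in data.get("Nodes", []):
    state = node.get("SystemState", {})
    by_version[state.get("Version", "unknown")].append(state.get("Host", "unknown"))

# More than one version usually means a rolling restart or update is in progress.
for version, hosts in sorted(by_version.items()):
    print(f"{version}: {len(hosts)} node(s)")
```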
Alert
A low uptime value of a YDB node might also indicate other problems. For example, see Insufficient memory (RAM).
Recommendations
For rolling restart
If an ongoing YDB cluster rolling restart significantly impacts applications to the point where they can no longer meet their latency requirements, consider slowing down the restart process (a scripted example follows this list):
- If nodes are restarted in batches, reduce the batch size, down to one node at a time.
- Space out the restarts of different data centers and/or server racks over time.
- Inject artificial pauses between restarts.
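If your restart tooling can be scripted, the pacing ideas above can be combined as in the following sketch. The node list, the pause length, and the `systemctl restart ydbd` command are hypothetical placeholders for whatever mechanism you actually use to restart a node (systemd, Kubernetes, ydbops, or another orchestration tool); a real procedure should still coordinate restarts through CMS so that data availability checks continue to apply.

```python
import subprocess
import time

# Hypothetical node list, ordered so that data centers and racks are spaced out.
NODES_IN_RESTART_ORDER = [
    "ydb-dc-a-rack1-node1.example.com",
    "ydb-dc-b-rack1-node1.example.com",
    "ydb-dc-a-rack2-node1.example.com",
]

PAUSE_BETWEEN_NODES_SEC = 300  # artificial pause; tune for your workload

def restart_node(host: str) -> None:
    # Placeholder: replace with your actual restart mechanism.
    subprocess.run(["ssh", host, "sudo", "systemctl", "restart", "ydbd"], check=True)

for host in NODES_IN_RESTART_ORDER:
    print(f"Restarting {host} ...")
    restart_node(host)
    # Give tablets time to move and latencies to stabilize before the next node.
    time.sleep(PAUSE_BETWEEN_NODES_SEC)
```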
For new version performance
The goal is to detect any negative performance impact of the new YDB version on specific queries in your particular workload as early as possible:
- Review the YDB server changelog for any performance-related notes relevant to your workload.
- Use a dedicated pre-production and/or testing YDB cluster to run a workload that closely mirrors your production workload. Always deploy the new YDB version to these clusters first. Monitor both client-side latencies and server-side metrics to identify any potential performance issues (see the latency-measurement sketch after this list).
- Implement a canary deployment by initially updating only one node and observing any changes in its behavior. If everything appears stable, gradually expand the update to more nodes, such as an entire server rack or data center, and repeat the checks for anomalies. If any issues arise, immediately roll back to the previous version and try to reproduce the issue in a non-production environment.
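One lightweight way to compare query behavior across versions is to run the same representative query repeatedly against the canary node (or the whole cluster) and record client-side latency percentiles before and after the update. Below is a minimal sketch using the YDB Python SDK; the endpoint, database path, query text, and run count are placeholders for your environment.

```python
import statistics
import time

import ydb  # YDB Python SDK: pip install ydb

ENDPOINT = "grpc://ydb.example.com:2136"  # placeholder
DATABASE = "/Root/mydb"                   # placeholder
QUERY = "SELECT 1;"                       # replace with a representative query
RUNS = 200

def measure_latencies_ms() -> list:
    driver = ydb.Driver(endpoint=ENDPOINT, database=DATABASE)
    latencies = []
    try:
        driver.wait(timeout=5, fail_fast=True)
        pool = ydb.SessionPool(driver)

        def run_query(session):
            session.transaction().execute(QUERY, commit_tx=True)

        for _ in range(RUNS):
            started = time.monotonic()
            pool.retry_operation_sync(run_query)
            latencies.append((time.monotonic() - started) * 1000)
        pool.stop()
    finally:
        driver.stop()
    return latencies

# Run once before and once after updating the canary node, then compare the numbers.
latencies = sorted(measure_latencies_ms())
print("p50 ms:", statistics.median(latencies))
print("p95 ms:", latencies[int(len(latencies) * 0.95) - 1])
```

Keep the comparison workload and the client host identical across both runs so that any difference reflects the server version rather than the environment.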
Report any identified performance issues on YDB's GitHub. Provide context and all the details that could help reproduce them.