Table
A table contains a set of related data, organized into rows and columns. Tables represent entities. For instance, a blog article can be represented by a table named `article` with columns `id`, `date_create`, `title`, `author`, `body`, and so on.
Rows in the table hold the data, while columns define the data types. For example, the `id` column cannot be empty (`NOT NULL`) and must contain only unique integer values. Such a table definition in YQL might look like this:
```yql
CREATE TABLE article (
    id Int64 NOT NULL,
    date_create Date,
    author String,
    title String,
    body String,
    PRIMARY KEY (id)
)
```
Please note that currently, the `NOT NULL` constraint can only be applied to columns that are part of the primary key.
YDB supports the creation of both row-oriented and column-oriented tables. The primary difference between them lies in their use cases and how data is stored on disk. In row-oriented tables, data is stored sequentially as rows, while in column-oriented tables, it is stored as columns. Each table type has its own specific purpose.
Row-oriented tables
Row-oriented tables are well-suited for transactional queries generated by Online Transaction Processing (OLTP) systems, such as weather service backends or online stores. Row-oriented tables offer efficient access to a large number of columns simultaneously. Lookups in row-oriented tables are optimized due to the utilization of indexes.
An index is a data structure that improves the speed of data retrieval operations based on one or several columns. It's analogous to an index in a book: instead of scanning every page of the book to find a specific chapter, you can refer to the index at the back of the book and quickly navigate to the desired page.
Searching using an index allows you to swiftly locate the required rows without scanning through all the data. For instance, if you have an index on the “author” column and you're looking for articles written by “Gray”, the DBMS leverages this index to quickly identify all rows associated with that surname.
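For example, such an index on the `author` column could be declared and queried like this (a minimal sketch; the index name `idx_author` is illustrative):

```yql
-- Add a global secondary index on the author column:
ALTER TABLE article ADD INDEX idx_author GLOBAL ON (author);

-- Look up articles through the index instead of scanning the whole table:
SELECT id, title
FROM article VIEW idx_author
WHERE author = "Gray";
```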
You can create a row-oriented table through the YDB web interface, CLI, or SDK. Regardless of the method you choose to interact with YDB, it's important to keep in mind the following rule: the table must have at least one primary key column, and it's permissible to create a table consisting solely of primary key columns.
By default, when creating a row-oriented table, all columns are optional and can contain `NULL` values. This behavior can be changed by setting the `NOT NULL` constraint on columns that are part of the primary key. Primary keys are unique, and row-oriented tables are always sorted by this key. This means that point reads by the key, as well as range queries by key or key prefix, are executed efficiently (essentially using an index). When choosing a key, it's crucial to be careful, so we recommend reviewing the article "Choosing a Primary Key for Maximum Performance".
Partitioning row-oriented tables
A row-oriented database table can be partitioned by primary key value ranges. Each partition of the table is responsible for a specific range of primary keys. Key ranges served by different partitions do not overlap. Different table partitions can be served by different cluster nodes (including ones in different locations). Partitions can also move independently between servers to enable rebalancing or to keep partitions operable if servers or network equipment goes offline.
If there is little data or load, the table may consist of a single shard. As the amount of data served by the shard or the load on the shard grows, YDB automatically splits the shard in two. If the shard size exceeds the threshold, the data is split by the median value of the primary key. If partitioning by load is used, the shard first collects a sample of the requested keys (reads, writes, and deletions) and, based on this sample, selects a key for partitioning that evenly distributes the load across the new shards. So in the case of load-based partitioning, the sizes of the new shards may vary significantly.
The size-based shard split threshold and automatic splitting can be configured (enabled/disabled) individually for each database table.
In addition to automatically splitting shards, you can create an empty table with a predefined number of shards. You can manually set the exact shard key split range or evenly split it into a predefined number of shards. In this case, ranges are created based on the first component of the primary key. You can set even splitting for tables that have a `Uint64` or `Uint32` integer as the first component of the primary key.
Partitioning parameters refer to the table itself rather than to secondary indexes built on its data. Each index is served by its own set of shards, and decisions to split or merge its partitions are made independently based on the default settings. These settings may become configurable by users in the future, like those of the main table.
A split or a merge usually takes about 500 milliseconds. During this time, the data involved in the operation becomes temporarily unavailable for reads and writes. Special wrapper methods in the YDB SDK automatically retry queries when they discover that a shard is being split or merged, without surfacing the error to the application. Please note that if the system is overloaded for some reason (for example, due to a general shortage of CPU or insufficient DB disk throughput), split and merge operations may take longer.
The following table partitioning parameters are defined in the data schema:
`AUTO_PARTITIONING_BY_SIZE`

- Type: `Enum` (`ENABLED`, `DISABLED`).
- Default value: `ENABLED`.

Automatic partitioning by partition size. If a partition size exceeds the value specified by the `AUTO_PARTITIONING_PARTITION_SIZE_MB` parameter, it is enqueued for splitting. If the total size of two or more adjacent partitions is less than 50% of the `AUTO_PARTITIONING_PARTITION_SIZE_MB` value, they are enqueued for merging.
`AUTO_PARTITIONING_BY_LOAD`

- Type: `Enum` (`ENABLED`, `DISABLED`).
- Default value: `DISABLED`.
Automatic partitioning by load. If a shard consumes more than 50% of the CPU for a few dozens of seconds, it is enqueued for splitting. If the total load on two or more adjacent shards uses less than 35% of a single CPU core within an hour, they are enqueued for merging.
Performing split or merge operations uses the CPU and takes time. Therefore, when dealing with a variable load, we recommend both enabling this mode and setting `AUTO_PARTITIONING_MIN_PARTITIONS_COUNT` to a value other than 1. This ensures that a decreased load does not cause the number of partitions to drop below the required value, resulting in a need to split them again when the load increases.
When choosing the minimum number of partitions, it makes sense to consider that one table partition can only be hosted on one server and use no more than 1 CPU core for data update operations. Hence, you can set the minimum number of partitions for a table on which a high load is expected to at least the number of nodes (servers) or, preferably, to the number of CPU cores allocated to the database.
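A sketch combining these recommendations (the table name and partition count are illustrative):

```yql
ALTER TABLE article SET (
    AUTO_PARTITIONING_BY_LOAD = ENABLED,
    AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 16
);
```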
`AUTO_PARTITIONING_PARTITION_SIZE_MB`

- Type: `Uint64`.
- Default value: `2000` MB (2 GB).
The desired partition size threshold in megabytes. Recommended values range from 10 MB to 2000 MB. If this threshold is exceeded, a shard may split. This setting takes effect when the `AUTO_PARTITIONING_BY_SIZE` mode is enabled.
This value serves as a recommendation for partitioning. Partitioning may sometimes not occur even if the configured size is exceeded.
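For example, size-based partitioning with a lower threshold could be configured like this (a sketch; the 512 MB value is illustrative):

```yql
ALTER TABLE article SET (
    AUTO_PARTITIONING_BY_SIZE = ENABLED,
    AUTO_PARTITIONING_PARTITION_SIZE_MB = 512
);
```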
`AUTO_PARTITIONING_MIN_PARTITIONS_COUNT`

- Type: `Uint64`.
- Default value: `1`.
Partitions are only merged if their actual number exceeds the value specified by this parameter. When using automatic partitioning by load, we recommend setting this parameter to a value other than 1 so that periodic load drops don't reduce the number of partitions below the required minimum.
`AUTO_PARTITIONING_MAX_PARTITIONS_COUNT`

- Type: `Uint64`.
- Default value: `50`.
Partitions are only split if their number doesn't exceed the value specified by this parameter. With any automatic partitioning mode enabled, we recommend setting a meaningful value for this parameter and monitoring how closely the actual number of partitions approaches it; otherwise, partition splitting will sooner or later stop as data or load grows, which will lead to a failure.
`UNIFORM_PARTITIONS`

- Type: `Uint64`.
- Default value: not applicable.
The number of partitions for uniform initial table partitioning. The primary key's first column must have type `Uint64` or `Uint32`. A created table is immediately divided into the specified number of partitions.
When automatic partitioning is enabled, make sure to set the correct value for `AUTO_PARTITIONING_MIN_PARTITIONS_COUNT` to avoid merging all partitions into one immediately after creating the table.
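A sketch of uniform initial partitioning (the table and partition count are illustrative):

```yql
CREATE TABLE metrics (
    series_id Uint64 NOT NULL,  -- first key column must be Uint64 or Uint32
    ts Timestamp,
    value Double,
    PRIMARY KEY (series_id, ts)
)
WITH (
    UNIFORM_PARTITIONS = 8,
    -- keep at least 8 partitions so they aren't merged back into one:
    AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 8
);
```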
`PARTITION_AT_KEYS`

- Type: `Expression`.
- Default value: not applicable.
Boundary values of keys for initial table partitioning. It's a list of boundary values separated by commas and surrounded with brackets. Each boundary value can be either a set of values of key columns (also separated by commas and surrounded with brackets) or a single value if only the values of the first key column are specified. Examples: `(100, 1000)`, `((100, "abc"), (1000, "cde"))`.
When automatic partitioning is enabled, make sure to set the correct value for `AUTO_PARTITIONING_MIN_PARTITIONS_COUNT` to avoid merging all partitions into one immediately after creating the table.
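A sketch of initial partitioning at explicit boundary keys (the table and boundary values are illustrative):

```yql
CREATE TABLE events (
    user_id Uint64 NOT NULL,
    ts Timestamp,
    payload String,
    PRIMARY KEY (user_id, ts)
)
WITH (
    -- Three partitions: user_id < 1000, 1000 <= user_id < 100000, user_id >= 100000
    PARTITION_AT_KEYS = (1000, 100000)
);
```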
Reading data from replicas
When making queries in YDB, the actual execution of a query to each shard is performed at a single point serving the distributed transaction protocol. By storing data in shared storage, you can run one or more shard followers without allocating additional storage space: the data is already stored in replicated format, and you can serve more than one reader (but there is still only one writer at any given moment).
Reading data from followers allows you:
- To serve clients demanding minimal delay, which is otherwise unachievable in a multi-DC cluster. This is accomplished by executing queries close to where they originate, which eliminates the delay associated with inter-DC transfers. As a result, you can both preserve all the storage reliability guarantees of a multi-DC cluster and respond to point read queries in milliseconds.
- To handle read queries from followers without affecting modifying queries running on a shard. This can be useful both for isolating different scenarios and for increasing the partition bandwidth.
- To ensure continued service when moving a partition leader (both in a planned manner for load balancing and in an emergency). It lets the processes in the cluster survive without affecting the reading clients.
- To increase the overall shard read performance if many read queries access the same keys.
You can enable running read replicas for each shard of the table in the table data schema. The read replicas (followers) are typically accessed without leaving the data center network, which ensures response delays in milliseconds.
| Parameter name | Description | Type | Acceptable values | Update capability | Reset capability |
|---|---|---|---|---|---|
| `READ_REPLICAS_SETTINGS` | `PER_AZ` means using the specified number of replicas in each AZ; `ANY_AZ` means the specified total number of replicas across all AZs. | String | `"PER_AZ:<count>"`, `"ANY_AZ:<count>"`, where `<count>` is the number of replicas | Yes | No |
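For example, read replicas could be enabled on an existing table like this (a sketch; one replica per availability zone):

```yql
ALTER TABLE article SET (READ_REPLICAS_SETTINGS = "PER_AZ:1");
```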
The internal state of each of the followers is restored exactly and fully consistently from the leader state.
Besides the data state in storage, followers also receive a stream of updates from the leader. Updates are sent in real time, immediately after the commit to the log. However, they are sent asynchronously, resulting in some delay (usually no more than dozens of milliseconds, but sometimes longer in the event of cluster connectivity issues) in applying updates to followers relative to their commit on the leader. Therefore, reading data from followers is only supported in the `StaleReadOnly()` transaction mode.
If there are multiple followers, their delay from the leader may vary: although each follower of each shard remains internally consistent, artifacts may be observed from shard to shard. Please account for this in your application code. For the same reason, it's currently impossible to perform cross-shard transactions from followers.
Deleting expired data (TTL)
YDB supports automatic background deletion of expired data. A table data schema may define a column containing a `Datetime` or a `Timestamp` value. A comparison of this value with the current time for all rows is performed in the background. Rows for which the current time is greater than the column value plus the specified delay are deleted.
| Parameter name | Type | Acceptable values | Update capability | Reset capability |
|---|---|---|---|---|
| `TTL` | Expression | `Interval("<literal>") ON <column> [AS <unit>]` | Yes | Yes |
Where `<unit>` is a unit of measurement, specified only for columns with a numeric type:

- `SECONDS`
- `MILLISECONDS`
- `MICROSECONDS`
- `NANOSECONDS`
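For example, TTL could be configured like this (a sketch; the `expire_at` `Timestamp` column and 24-hour delay are illustrative):

```yql
-- Delete rows 24 hours after the time stored in expire_at:
ALTER TABLE article SET (TTL = Interval("PT24H") ON expire_at);
```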
For more information about deleting expired data, see Time to Live (TTL).
Renaming a table
YDB lets you rename an existing table, move it to another directory of the same database, or replace one table with another, deleting the data in the replaced table. These operations change only the table's metadata (for example, its path and name). The table data is neither moved nor overwritten.
Operations are performed in isolation; an external process sees only two states of the table: before and after the operation. This is critical, for example, for table replacement: the data of the replaced table is deleted by the same transaction that renames the replacing table. During the replacement, queries to the replaced table may fail with retryable statuses.
The speed of renaming is determined by the type of data transactions currently running against the table and doesn't depend on the table size.
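One way to rename a table in YQL (a sketch; the paths are illustrative):

```yql
-- Move the table to another directory within the same database:
ALTER TABLE `blog/article` RENAME TO `blog/archive/article`;
```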
Bloom filter
Using a Bloom filter lets you more efficiently determine if some keys are missing in a table when making multiple point queries by primary key. This reduces the number of required disk I/O operations but increases the amount of memory consumed.
| Parameter name | Type | Acceptable values | Update capability | Reset capability |
|---|---|---|---|---|
| `KEY_BLOOM_FILTER` | Enum | `ENABLED`, `DISABLED` | Yes | No |
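For example, the Bloom filter could be enabled on an existing table like this (a sketch using the `article` table above):

```yql
ALTER TABLE article SET (KEY_BLOOM_FILTER = ENABLED);
```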
Column-oriented tables
Warning
Column-oriented YDB tables are in Preview mode.
YDB's column-oriented tables store data of each column separately (independently) from each other. This data storage principle is optimized for handling Online Analytical Processing (OLAP) workloads, as only the columns directly involved in the query are read during its execution. One of the key advantages of this approach is the high data compression ratios since columns often contain repetitive or similar data. A downside, however, is that operations on whole rows become more resource-intensive.
At the moment, the main use case for YDB column-oriented tables is writing data with an increasing primary key (for example, event time), analyzing this data, and deleting outdated data based on TTL. The optimal way to add data to YDB column-oriented tables is batch upload, performed in MB-sized blocks. Data packet insertion is atomic: data will be written either to all partitions or none.
In most cases, working with YDB column-oriented tables is similar to working with row-oriented tables, but there are differences:

- Only `NOT NULL` columns can be used as the primary key.
- Data is partitioned not by the primary key, but by the hash of the partitioning columns, to evenly distribute the data across the hosts.
- Column-oriented tables support a limited set of data types:
  - Available in both the primary key and other columns: `Date`, `Datetime`, `Timestamp`, `Int32`, `Int64`, `Uint8`, `Uint16`, `Uint32`, `Uint64`, `Utf8`, `String`;
  - Available only in columns not included in the primary key: `Bool`, `Decimal`, `Double`, `Float`, `Int8`, `Int16`, `Interval`, `JsonDocument`, `Json`, `Uuid`, `Yson`.
Let's recreate the "article" table, this time in column-oriented format, using the following YQL command:
```yql
CREATE TABLE article_column_table (
    id Int64 NOT NULL,
    author String,
    title String,
    body String,
    PRIMARY KEY (id)
)
WITH (STORE = COLUMN);
```
At the moment, not all functionality of column-oriented tables is implemented. The following features are not currently supported:
- Reading from replicas.
- Secondary indexes.
- Bloom filters.
- Change Data Capture.
- Table renaming.
- Custom table attributes.
- Modifying the list of columns.
- Adding data to column-oriented tables using the SQL `INSERT` statement.
- Deleting data from column-oriented tables using the SQL `DELETE` statement. In practice, data is deleted only after its TTL retention period expires.
Partitioning of a column-oriented table
Unlike row-oriented YDB tables, you cannot partition column-oriented tables by primary keys but only by specially designated partitioning keys. Partitioning keys constitute a subset of the table's primary keys.
Example of column-oriented partitioning:
```yql
CREATE TABLE article_column_table (
    id Int64 NOT NULL,
    author String,
    title String,
    body String,
    PRIMARY KEY (id)
)
PARTITION BY HASH(id)
WITH (STORE = COLUMN);
```
Unlike data partitioning in row-oriented YDB tables, key values are not used to partition data in column-oriented tables. This way, you can uniformly distribute data across all existing partitions. This kind of partitioning helps you avoid hotspots when inserting data and speeds up analytical queries that process (that is, read) large amounts of data.
How you select partitioning keys substantially affects the performance of queries to your column-oriented tables. Learn more in Choosing keys for maximum column-oriented table performance.
To manage data partitioning, use the `AUTO_PARTITIONING_MIN_PARTITIONS_COUNT` parameter. The system ignores other partitioning parameters for column-oriented tables.
`AUTO_PARTITIONING_MIN_PARTITIONS_COUNT` sets the minimum physical number of partitions used to store data.

- Type: `Uint64`.
- Default value: `1`.
Because the system ignores all other partitioning parameters, it also uses this value as the upper limit on the number of partitions.
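For example (a sketch extending the table above; the partition count is illustrative):

```yql
CREATE TABLE article_column_table (
    id Int64 NOT NULL,
    author String,
    title String,
    body String,
    PRIMARY KEY (id)
)
PARTITION BY HASH(id)
WITH (
    STORE = COLUMN,
    -- store the data in at least 10 physical partitions:
    AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 10
);
```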