Importing table structures and data from JDBC data sources to YDB
Introduction
Table structures and data can be imported into YDB from data sources accessible via JDBC drivers using the JDBC import tool. This can be useful when migrating from another database management system to YDB, or for importing real data to run benchmarks on YDB.
Data import is performed using the following process:
- The tool connects to the source database and determines the list of tables and custom SQL queries to be imported.
- The structure of the target YDB tables is generated and optionally saved as a YQL script.
- The target database is checked for the existence of the target tables. Missing tables are created, and already existing ones may be left as is or re-created.
- Row data is imported by reading from the source database with SELECT statements and writing into the target YDB database using the Bulk Upsert mechanism.
All operations, including metadata extraction from the source database, table creation (and re-creation) in the target database, and row data import, are performed in parallel (one thread per table), using multiple concurrent threads and multiple open connections to both the source and target databases. The maximum degree of parallelism is configurable, although the number of concurrent operations can't exceed the number of imported tables or custom SQL queries.
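The degree of parallelism is set through the workers section of the configuration file (the full format is described below). For example:

<workers>
    <!-- Four worker threads: at most four tables are processed concurrently,
         and at most four sessions are opened to each of the databases. -->
    <pool size="4"/>
</workers>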
Import tool installation
To run the import tool, JDK 8 or later is required, with the java executable available in the program search path. The tool has been built and tested using OpenJDK 8.
The import tool binary distribution is available on the Releases page. It is provided as a ZIP archive, which needs to be unpacked into a local directory on the server where the tool will run.
The distribution archive contains example configuration files as *.xml files, a sample startup script ydb-importer.sh, and the executable code of the tool and its dependencies (including the YDB Java SDK) as lib/*.jar files.
Before running the import tool, put the JDBC drivers for the source databases (as *.jar files) into the lib subdirectory of the tool installation directory.
Using the import tool
The tool reads the XML configuration file on startup. When running the tool, the file name must be passed as a command-line argument. Settings in the configuration file define:
- the source database type and connection preferences;
- the target YDB database connection preferences;
- (optional) the name of a file to save the generated YQL script with the structure of the target YDB tables;
- the rules for filtering source table names;
- the list of custom SQL queries used to fetch data from the data source (custom queries can be defined to read all or some of the tables);
- the rules for generating target table names based on the names of the source tables;
- the degree of parallelism (which determines the worker pool size, plus the connection pool sizes for both the source and target databases).
Before running the import tool, prepare a configuration file based on one of the provided samples; a minimal sketch is shown below.
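This sketch assumes a hypothetical PostgreSQL source; host names, database names, and credentials are placeholders, and the full format with all available settings is described in the configuration file format section below:

<?xml version="1.0" encoding="UTF-8"?>
<ydb-importer>
    <workers>
        <pool size="4"/>
    </workers>
    <source type="postgresql">
        <jdbc-class>org.postgresql.Driver</jdbc-class>
        <jdbc-url>jdbc:postgresql://hostname:5432/dbname</jdbc-url>
        <username>username</username>
        <password>password</password>
    </source>
    <target type="ydb">
        <connection-string>grpc://localhost:2136?database=/Root/testdb</connection-string>
        <auth-mode>ENV</auth-mode>
        <load-data>true</load-data>
    </target>
    <table-options name="default">
        <table-name-format>import/${schema}/${table}</table-name-format>
        <blob-name-format>import/${schema}/${table}_${field}</blob-name-format>
    </table-options>
    <table-map options="default">
        <include-schemas regexp="true">.*</include-schemas>
    </table-map>
</ydb-importer>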
An example command to run the tool is provided in the ydb-importer.sh script, which can be used to start the tool as shown below:
./ydb-importer.sh my-import-config.xml
Import tool limitations
The tested data sources include the following database management systems:
- PostgreSQL
- MySQL
- Oracle Database
- Microsoft SQL Server
- IBM DB2
- Informix
Other data sources will likely work too, as the tool uses the generic JDBC APIs to retrieve data and metadata.
Secondary indexes are not imported.
Some data types and table structures are known to be unsupported:
- the embedded tables for Oracle Database
- the spatial data types for Microsoft SQL Server
- the object data types for Informix
Handling tables without the primary key
Each YDB table must have a primary key. If a primary key (or at least a unique index) is defined on the source table, the tool creates the primary key for the target YDB table using the columns, and their order, from the original primary key or unique index. When multiple unique indexes are defined on the source table, the tool prefers the index with the smallest number of columns.
If the source table has no primary key and no unique indexes, the primary key columns can be configured in the import settings as a sequence of key-column elements in the table-ref section of the configuration file, as in the sketch below (see also the example in the description of the configuration file format).
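A table-ref entry with explicitly configured key columns might look like this (the schema, table, and column names are hypothetical):

<table-ref options="default">
    <schema-name>myschema</schema-name>
    <table-name>mytable</table-name>
    <!-- Columns to be used as the primary key of the target YDB table. -->
    <key-column>ID</key-column>
    <key-column>REVISION</key-column>
</table-ref>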
When no primary key is defined anywhere, the tool automatically adds a column named ydb_synth_key to the target table and uses it as the primary key. The values of the synthetic primary key are computed as a SHA-256 hash over the values of all row cells except the columns of the BLOB type.
If the input table contains several rows with completely identical values, the destination table will contain only one row for each such set of duplicates. This also means that the tool cannot import a table in which all columns are of the BLOB type.
Importing large objects (BLOB, XML)
For each BLOB field in the source database, the import tool creates an additional YDB target table (BLOB supplemental table) with the following schema:
CREATE TABLE `blob_table`(
    `id` Int64,    -- unique identifier of the source BLOB value
    `pos` Int32,   -- block sequence number, 1..N
    `val` String,  -- block data, up to 64K bytes per row
    PRIMARY KEY(`id`, `pos`)
)
The name of the BLOB supplemental table is generated based on the table-options / blob-name-format setting in the configuration file.
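For example, with the setting below (the format value is hypothetical), the BLOB field BODY of the table DOCS in the schema PUBLIC would be stored in the supplemental table import/PUBLIC/DOCS_BODY:

<table-options name="default">
    <blob-name-format>import/${schema}/${table}_${field}</blob-name-format>
</table-options>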
Each BLOB field value is saved as a sequence of records in the BLOB supplemental table in YDB. The actual data is stored in the val field, with no more than 64K bytes per record. The order of the blocks is defined by the values of the pos column, which contains the integer values 1..N.
For each source BLOB value, a unique value of type Int64 is generated and stored in the id field of the BLOB supplemental table.
This identifier is also stored in the "main" target table in a field that has the same name as the BLOB field in the source table.
For PostgreSQL, working with the EXTENSION lo BLOBs may require extra permissions for the account used to connect to the data source. An alternative option is to enable the compatibility mode at the database level using the following statement:
ALTER DATABASE dbname SET lo_compat_privileges TO on;
Configuration file format
Sample configuration files are provided in the distribution archive as *.xml files. Below is the explanation of the configuration file structure:
<?xml version="1.0" encoding="UTF-8"?>
<ydb-importer>
<workers>
<!-- Number of worker threads (an integer, 1 or greater).
This setting also defines the maximum number of source and target database sessions.
-->
<pool size="4"/>
</workers>
<!-- Source database connection parameters.
type - the required attribute defining the type of the data source
-->
<source type="generic|postgresql|mysql|oracle|mssql|db2|informix">
<!-- JDBC driver class name to be used. Typical values:
org.postgresql.Driver
com.mysql.cj.jdbc.Driver
org.mariadb.jdbc.Driver
oracle.jdbc.driver.OracleDriver
com.microsoft.sqlserver.jdbc.SQLServerDriver
com.ibm.db2.jcc.DB2Driver
com.informix.jdbc.IfxDriver
-->
<jdbc-class>driver-class-name</jdbc-class>
<!-- JDBC driver URL. Value templates:
jdbc:postgresql://hostname:5432/dbname
jdbc:mysql://hostname:3306/dbname
jdbc:mariadb://hostname:3306/dbname
jdbc:oracle:thin:@//hostname:1521/serviceName
jdbc:sqlserver://localhost;encrypt=true;trustServerCertificate=true;database=AdventureWorks2022;
jdbc:db2://localhost:50000/SAMPLE
jdbc:informix-sqli://localhost:9088/stores_demo:INFORMIXSERVER=informix
-->
<jdbc-url>jdbc-url</jdbc-url>
<username>username</username>
<password>password</password>
</source>
<!-- Target YDB database connection parameters. -->
<target type="ydb">
<!-- If the following tag is defined, the tool will save the generated YQL script
for creating the target YDB tables into the file specified.
This setting can also be used without specifying the connection string,
if the actual target schema creation is not required.
-->
<script-file>sample-database.yql.tmp</script-file>
<!-- Connection string: protocol + endpoint + database. Typical values:
grpcs://ydb.serverless.yandexcloud.net:2135?database=/ru-central1/b1gfvslmokutuvt2g019/etn63999hrinbapmef6g
grpcs://localhost:2135?database=/local
grpc://localhost:2136?database=/Root/testdb
-->
<connection-string>ydb-connection-string</connection-string>
<!-- Authentication mode:
ENV Configure authentication via environment variables
NONE Anonymous access, or static credentials in the connection string
STATIC Static credentials defined as login and password properties (see below)
SAKEY Service account key file authentication for YDB Managed Service
METADATA Service account metadata authentication for YDB Managed Service
-->
<auth-mode>ENV</auth-mode>
<!--
For managed YDB in Yandex Cloud, authentication parameters can be specified
in the environment variables, as specified in the documentation:
https://ydb.tech/en/docs/reference/ydb-sdk/auth#env
In case service account authentication is used, either explicitly
or through the YDB_SERVICE_ACCOUNT_KEY_FILE_CREDENTIALS environment variable,
the key file must be generated as described in the following document:
https://cloud.yandex.com/en/docs/iam/operations/authorized-key/create
-->
<!-- Custom TLS certificates, if needed -->
<tls-certificate-file>ca.crt</tls-certificate-file>
<!-- For auth-mode: STATIC -->
<static-login>username</static-login>
<static-password>password</static-password>
<!-- For auth-mode: SAKEY -->
<sa-key-file>ydb-sa-keyfile.json</sa-key-file>
<!-- Drop the already existing tables before loading the data -->
<replace-existing>false</replace-existing>
<!-- Should the tool actually load the data after creating tables? -->
<load-data>true</load-data>
<!-- Maximum rows per bulk upsert operation
(used for writing into the main table) -->
<max-batch-rows>1000</max-batch-rows>
<!-- Maximum rows per blob bulk upsert operation
(used for writing into the supplemental BLOB fields tables) -->
<max-blob-rows>200</max-blob-rows>
</target>
<!-- Table name and structure conversion rules.
Each rule is defined under a distinct name, and later referenced in the table mappings.
-->
<table-options name="default">
<!-- Table name case conversion mode: ASIS (no changes, used by default), LOWER, UPPER. -->
<case-mode>ASIS</case-mode>
<!-- The template used to generate the full destination YDB table name,
including the directory to put the table in.
The values ${schema} and ${table} are used to substitute
the schema and table name of the source table. -->
<table-name-format>oraimp1/${schema}/${table}</table-name-format>
<!-- The template used to generate the name of the supplemental table
to store source BLOB field's data in YDB.
The values ${schema}, ${table} and ${field}
are used for source schema, table and BLOB field names. -->
<blob-name-format>oraimp1/${schema}/${table}_${field}</blob-name-format>
<!-- Date and timestamp data type values conversion mode.
Possible values: DATE (use YDB Date datatype, default), INT, STR.
DATE does not support input values before January 1, 1970;
such values in the source will cause import errors.
INT saves dates as a 32-bit integer YYYYMMDD,
and timestamps as a 64-bit number of milliseconds since the epoch.
STR saves dates as character strings (Utf8) in the format "YYYY-MM-DD",
and timestamps in the format "YYYY-MM-DD hh:mm:ss.xxx".
-->
<conv-date>INT</conv-date>
<conv-timestamp>STR</conv-timestamp>
<!-- If true, columns with unsupported types are skipped with a warning;
otherwise an import error is generated, and the whole table is skipped. -->
<skip-unknown-types>true</skip-unknown-types>
</table-options>
<!-- Table map filters the source tables and defines the conversion modes for them. -->
<table-map options="default">
<!-- Schemas to include, can be specified as regular expression or literal value -->
<include-schemas regexp="true">.*</include-schemas>
<!-- Schemas to exclude (of those included), as literal or regular expression -->
<exclude-schemas>SOMESCHEMA</exclude-schemas>
<!-- Particular tables can be included or excluded using the
"include-tables" and "exclude-tables" tags,
using literals or regular expressions as well. -->
</table-map>
<!-- A table reference refers to a particular table in a particular schema
and may optionally specify a query to execute. The latter option allows importing
virtual tables that do not actually exist on the data source. -->
<table-ref options="default">
<schema-name>ora$sys</schema-name>
<table-name>all_tables</table-name>
<!-- If the query text is defined, it is executed as shown. -->
<query-text>SELECT * FROM all_tables</query-text>
<!-- Primary key column names should be defined for the query result. -->
<key-column>OWNER</key-column>
<key-column>TABLE_NAME</key-column>
</table-ref>
</ydb-importer>