
Requirements for IBM Db2 staging and target environments

This section describes the host requirements for the staging environment, where the dSource is linked, and for the target environment, where the VDB is provisioned, based on the ingestion method being used.

Each requirement below is explained for Single Partition (Non-DPF) environments and for Multi Partitions (DPF) environments.

Access to database backups

Single Partition (Non-DPF): The staging environment must have access to a full online backup of the source database on disk to create the first full copy. Delphix Continuous Data Engine recommends using compressed backups, as they reduce storage needs and speed up ingestion. It is recommended to test the integrity of the backup with the db2ckbkp utility before using it for ingestion. Once this first full backup is ingested into the staging database, representing the Db2 dSource, subsequent synchronizations can be accomplished via HADR or archive logs.

Multi Partitions (DPF): Same as non-DPF.
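For example, a minimal integrity check of a backup image before ingestion (the image path below is hypothetical):

CODE
# Verify the integrity of the backup image before linking (example path only)
db2ckbkp /backups/MYDB.0.db2inst1.DBPART000.20240101120000.001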

HADR Ingestion - port specific

Single Partition (Non-DPF): The configuration values for HADR_LOCAL_SVC and HADR_REMOTE_SVC should be set. These ports are used at the end user's discretion and need to be specified during the linking process. It is highly recommended that these ports also be defined in the /etc/services file to ensure that they are only used by Db2 for the specified databases.

Multi Partitions (DPF): Not applicable.
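A hedged example of defining the HADR service parameters and reserving the ports in /etc/services (the database name, service names, and port numbers below are placeholders; where these are set depends on your HADR setup):

CODE
# Set the HADR service names for the database (example values)
db2 update db cfg for MYDB using HADR_LOCAL_SVC dlpx_hadr_local HADR_REMOTE_SVC dlpx_hadr_remote

# Matching /etc/services entries on the relevant hosts
dlpx_hadr_local   3700/tcp
dlpx_hadr_remote  3701/tcp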

Db2 instance

Single Partition (Non-DPF): The staging and target Db2 instances that you wish to use must already exist on the host. The same instance can be used for both dSource and VDB creation, and multiple VDBs can reside on the same instance.

Multi Partitions (DPF): Same as non-DPF.
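For example, the instances that already exist on a host can be listed with db2ilist (the installation path below is an assumption; adjust it for your Db2 installation):

CODE
# List the Db2 instances that already exist on this host
/opt/ibm/db2/V11.5/bin/db2ilist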

Db2 instance configuration parameters

Single Partition (Non-DPF): Instance-level configuration values, such as bufferpool sizes, must be managed by the end users independently of Delphix Continuous Data Engine.

Multi Partitions (DPF): Same as non-DPF.
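As an illustration only (the database name, bufferpool name, and size are examples), such settings are adjusted directly through Db2 rather than through Delphix:

CODE
# Example: adjust a bufferpool outside of Delphix (values are illustrative)
db2 connect to MYDB
db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE 50000"
db2 terminate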

Db2 instance compatibility

Single Partition (Non-DPF): The instances used for the staging and target environments must be identical to the source Db2 instance in terms of installation version.

Multi Partitions (DPF): Same as non-DPF.
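The installed version can be compared across the source, staging, and target environments by running db2level as the instance user:

CODE
# Report the Db2 version, release, and fix pack level of this instance
db2level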

Number of logical partitions

Single Partition (Non-DPF): Not applicable.

Multi Partitions (DPF): The staging and target hosts must be configured with the same number of logical partition nodes as the source.

Logical partition configuration

Single Partition (Non-DPF): Not applicable.

Multi Partitions (DPF): The logical partition configuration must be added to the $HOME/sqllib/db2nodes.cfg file, where $HOME is the home directory of the instance user. The hostname used in the db2nodes.cfg file should be a fully qualified domain name.

Example:

  • A db2nodes.cfg file configured for 4 logical partitions should look like:

    CODE
    0 <FQDN hostname> 0
    1 <FQDN hostname> 1
    2 <FQDN hostname> 2
    3 <FQDN hostname> 3

/etc/services entry

Single Partition (Non-DPF): Not applicable.

Multi Partitions (DPF):

  • Each logical partition must have a corresponding port entry in the /etc/services file of the hosts.

  • In the example below, db2inst1 is the instance user, and port numbers 60000 to 60003 are reserved for inter-partition communication.

CODE
DB2_db2inst1 60000/tcp
DB2_db2inst1_1 60001/tcp
DB2_db2inst1_2 60002/tcp
DB2_db2inst1_END 60003/tcp

Registry variable setting

Single Partition (Non-DPF): The registry variables DB2RSHCMD and DB2COMM should be updated with values as shown below:

CODE
SSHCMD=$(which ssh)
db2set DB2RSHCMD="$SSHCMD"
db2set DB2COMM=TCPIP

Multi Partitions (DPF): Same as non-DPF.

db2_all utility test

Single Partition (Non-DPF): Not applicable.

Multi Partitions (DPF): After the environment is configured on staging and target, execute db2_all echo hello on both the staging and target servers to add the host key to the known_hosts file. This also helps to check for any SSH issues on these servers.
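For example, run as the instance user on each server:

CODE
# Runs "echo hello" on every partition and records host keys in known_hosts
db2_all "echo hello"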

Archive logs

Single Partition (Non-DPF): Instance users should have permission to read the logs.

Multi Partitions (DPF): The backup files (or the log directory, if using archive logs) should be consistent with the partitions configured on staging.
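A quick, hypothetical way to confirm that an instance user can read the archive log location (the instance user name and path below are placeholders):

CODE
# Confirm the instance user can list and read the archive log directory
su - db2inst1 -c "ls -l /db2/archive_logs"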

SSH configuration parameters

Single Partition (Non-DPF): Not applicable.

Multi Partitions (DPF): When configuring staging and target environments for multiple partitions (DPF), the ClientAliveInterval option must be set to 0 or commented out to prevent the SSH connection from being severed in the middle of command executions.
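For example, in /etc/ssh/sshd_config on the staging and target hosts (restarting the SSH service afterwards is assumed; the restart command varies by platform):

CODE
# Disable the server-side keepalive timeout so long-running db2_all commands are not cut off
ClientAliveInterval 0

# Apply the change (systemd example shown)
sudo systemctl restart sshd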

Primary user permissions

Single Partition (Non-DPF): There must be an operating system user (delphix_os) with these privileges:

  • Ability to log in to the staging and target environment via SSH

  • Ability to run mount, umount, mkdir, and rmdir as a super-user. If the staging or target host is an AIX system, permission to run the nfso command as a super-user is also required (see the example sudoers sketch after this entry).

Multi Partitions (DPF): Same as non-DPF.
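A minimal sudoers sketch granting these privileges, assuming the user is named delphix_os and standard binary locations (adjust paths for your platform):

CODE
# Hypothetical /etc/sudoers.d/delphix_os entry
delphix_os ALL=(root) NOPASSWD: /bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir
# On AIX, additionally:
# delphix_os ALL=(root) NOPASSWD: /usr/sbin/nfso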

Toolkit directory and its permissions

Single Partition (Non-DPF): There must be a directory on the staging and target environment where you can install the IBM Db2 connector, for example, /var/opt/delphix/toolkit (a minimal setup sketch follows this entry).

  • The primary user (delphix_os) must own the toolkit path directory (/var/opt/delphix/toolkit).

  • If the primary user and the instance users share a common group, then the toolkit path directory (/var/opt/delphix/toolkit) must have permissions rwxrwx--- (0770), although more permissive settings can also be used. This allows instance users who are part of the same group as the delphix_os user's group to create directories inside the toolkit path directory.

  • If the delphix_os user and the instance users do not share any common group, then the toolkit path directory must have rwxrwxrwx (0777) permissions.

  • The directory should have 1.5GB of available storage: 400MB for the connector and 400MB for the set of logs generated by each Db2 instance that runs on the host.

  • The connector directory will be used as the base location for the mount point by default if no other custom mount base is provided.

Multi Partitions (DPF): Same as non-DPF.
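A minimal sketch of preparing the toolkit directory under the shared-group model, using the example path, user, and group named above:

CODE
# Create the toolkit directory, owned by delphix_os, with 0770 permissions
sudo mkdir -p /var/opt/delphix/toolkit
sudo chown delphix_os:delphix_os /var/opt/delphix/toolkit
sudo chmod 770 /var/opt/delphix/toolkit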

Common group between primary and instance users

Single Partition (Non-DPF): The primary environment user should share its primary group with the instance users. For example, if delphix_os is the primary environment user used for environment addition and its primary group is also delphix_os, then the instance users (responsible for Delphix Continuous Data Engine operations such as linking and provisioning) should have the delphix_os group as their secondary group.

Multi Partitions (DPF): Same as non-DPF.
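For instance, with the group name from the example above (the instance user name db2inst1 is a placeholder):

CODE
# Add the instance user to the delphix_os group as a secondary group
sudo usermod -aG delphix_os db2inst1

# Verify the group membership
id db2inst1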

Instance user requirement

Single Partition (Non-DPF):

  • The instance owner of each instance you wish to use within a staging or target host must be added as an environment user within the Delphix Continuous Data Engine.

  • For HADR-synced dSources, the staging instance owner must be able to read the ingested database contents, as the Delphix Continuous Data Engine checks the validity of the database by querying tables before each dSource snapshot.
    Note: If a Db2 database tablespace container is added or deleted on the primary/source database, the dSource will have to be resynced.

  • Ensure that the following database configurations are set to default values:

Multi Partitions (DPF): Same as non-DPF.
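A hypothetical way to confirm that the staging instance owner can read the ingested database (the database name is a placeholder):

CODE
# Run as the staging instance owner; MYDB is an example dSource database name
db2 connect to MYDB
db2 "SELECT COUNT(*) FROM SYSCAT.TABLES"
db2 terminate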

DB2_RESTORE_GRANT_ADMIN_AUTHORITIES

Single Partition (Non-DPF): If the instance user does not have access to the source database and the staging host uses a different instance user to ingest the dSource, the user can set the Db2 registry variable DB2_RESTORE_GRANT_ADMIN_AUTHORITIES to ON. Db2 will then grant administrative authorities to the staging host instance after the restore is complete. However, since this only works during the restore operation, VDBs can only be provisioned on an instance with the same name as the staging database.

Multi Partitions (DPF): Same as non-DPF.
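The corresponding db2set commands, run as the staging instance owner:

CODE
# Grant administrative authorities to the staging instance after restore
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON

# Verify the registry variable
db2set -all | grep DB2_RESTORE_GRANT_ADMIN_AUTHORITIES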

Database level requirement

Single Partition (Non-DPF):

  • The staging and target hosts must have the empty instances created prior to Delphix Continuous Data Engine using them.

  • The desired OS user that executes commands related to dSource and VDB operations for each instance must be added as an environment user.

  • Make sure the staging instance used for linking does not have an existing restoring database with the same name (a hypothetical check is sketched after this entry).

  • The dSource ingestion or VDB provisioning should not be performed on the source instance to maintain the integrity of the source database.

Multi Partitions (DPF): Same as non-DPF.
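A minimal, hypothetical check on the staging instance (the database name is an example):

CODE
# List databases known to this instance and look for a name clash with the dSource
db2 list db directory | grep -i MYDB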

Banners

Single Partition (Non-DPF): For connector versions earlier than 4.2.0, banners are not supported.

For connector versions 4.2.0 and later, .profile, MOTD, and SSH banners are supported.

Note: Print/echo statements in .bashrc are not supported by the connector.

Multi Partitions (DPF): Same as non-DPF.

It is highly recommended that the Database Partitioning Feature (DPF) staging and target environments for IBM Db2 be configured on separate hosts, because DPF instances consume a significant amount of resources.
