Master/slave synchronization and clusters

In Bitvise SSH Server versions 6.xx and higher, the SSH server can be run in master/slave mode, which facilitates its use in a cluster or a large-scale deployment.

The purpose of the master/slave feature is to automate synchronization of SSH server settings between SSH servers. It is intended for environments where administrators apply settings changes on one server (the master) and have the changes automatically propagate to others (slaves). The master/slave feature does not interact with solutions for server monitoring or load balancing. If your deployment requires load balancing, you will need an external solution.

To automatically reproduce some or all aspects of the SSH server's configuration from a primary installation to one or more secondary installations, use the Instance type feature in the Bitvise SSH Server Control Panel to configure the primary installation as the master. Then, configure secondary installations to run as slaves and retrieve configuration changes from the master.

In a typical cluster installation, a secondary server should appear identical to the primary server from the perspective of its users. To achieve this, a slave would reproduce all aspects of the SSH server's configuration: settings, host keys, and password cache. Which aspects of SSH server configuration will be copied from the master is configured in the Instance type dialog for each slave installation.

Configuring master/slave synchronization

Master/slave synchronization is configured through the Instance type setting in the Bitvise SSH Server Control Panel (top right corner of the Server tab). The following steps are required:
  1. On the master server:
    1. Set instance type to Master, and configure a password which slave SSH servers will be required to present in order to synchronize settings from the master. We highly recommend configuring a long, secure, randomly generated password as described on this page.
    2. Use the Manage host keys interface to export the public keys of all host keys used by the SSH server. Alternatively, record the fingerprints of the host keys employed by the master, so that you can enter them manually into slave configuration.
  2. On slave servers:
    1. Set instance type to Slave.
    2. Import the master's host keys through the Host keys and fingerprints setting. Alternatively, use Add Fp to add a master's host key fingerprint without importing the key.
    3. Enter the master's network address and port, and set the synchronization password to match the one configured on the master.
    4. In the remaining slave settings, configure which aspects of SSH server settings to synchronize from the master. Host keys can be synchronized from the master only if this is permitted in master settings.
    5. If you enable Auto-manage trusted host keys, the slave server will automatically add to its Host keys and fingerprints setting any new host keys generated on the master, provided they are not yet employed. If a host key is already employed when the slave first sees it, the slave will not be able to connect regardless of this setting, because it has no prior knowledge of the key.

If a node fails...

If a slave goes down, the master and any other slaves will remain up. There will still be nodes to handle connections, and it will remain possible to administer SSH Server settings for the cluster through the master. When the failed slave is brought back online, it will re-synchronize.

If the master goes down, a slave will not automatically become a master. The master needs to be brought back online; otherwise, an administrator must reconfigure the nodes in the cluster so that a different server serves as master. While the master is down, it will not be possible to change SSH Server settings for the cluster through the master, but slaves will continue to operate according to the last settings they received from it. When the master is brought back online, slaves will re-synchronize.

Upgrading servers in a master/slave configuration

For a master/slave arrangement to function, servers that receive synchronization data must run an SSH Server version equal to or greater than that of the server from which they obtain the data.

Servers in a master/slave configuration should therefore be upgraded in the following order:

  1. Slave servers first.
  2. Secondary masters (servers that act as both slave and master) second.
  3. Master server last.

If a master server is upgraded to an SSH Server version that uses a settings format newer than some of the slaves understand, those slaves will no longer be able to synchronize with that master. However, slaves running newer versions will continue to recognize settings from an older master.
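The compatibility rule above can be sketched as a simple version comparison. This is an illustration only: actual compatibility is determined by the settings format each version understands, and the function names and version strings below are assumptions for the sketch, not part of the product.

```python
# Illustration of the upgrade-order rule: a receiving (slave) server's
# version must be equal to or greater than the sending (master) server's.
# The dotted-version comparison here is an assumption for illustration.

def version_tuple(version: str) -> tuple[int, ...]:
    """Parse a dotted version string such as "9.12" into comparable integers."""
    return tuple(int(part) for part in version.split("."))

def slave_can_sync(master_version: str, slave_version: str) -> bool:
    """A slave can synchronize only if its version is >= the master's."""
    return version_tuple(slave_version) >= version_tuple(master_version)

# A slave on the same or a newer version can synchronize:
print(slave_can_sync("9.12", "9.12"))  # True
print(slave_can_sync("9.12", "9.14"))  # True
# A slave older than the master cannot:
print(slave_can_sync("9.14", "9.12"))  # False
```

This is also why slaves are upgraded first: at every point during a rolling upgrade, each slave's version stays equal to or ahead of its master's.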

Unattended slave installation

If you would like to script several SSH Server slave installations so that they can be performed unattended, the first preparatory step is to use the graphical SSH Server Control Panel on a model slave installation to configure settings for a typical slave. This includes importing the master's host keys. Once the settings are configured and saved, use the same interface to export the instance type settings into a file. In the examples below, this file is assumed to be named BvSshServerSlave.wit.

On the slaves you want to script, the next step is to perform a normal unattended SSH Server installation, which can be done independently of instance type. This is described on the page Installing Bitvise SSH Server.

Once the SSH Server is installed, you can use the BssCfg utility, found in the SSH Server installation directory, to import the slave settings from the command line, as follows:

BssCfg instanceType importBin C:\Path\BvSshServerSlave.wit

This command needs to be run in an elevated, administrative Command Prompt or PowerShell session.

Once this completes, the SSH Server is configured as a slave, and can be started.
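When provisioning many slaves, the import and subsequent service start can be wrapped in a small script run on each target machine. The Python sketch below builds on the documented BssCfg command; the installation path and the service name "BvSshServer" are assumptions, so verify both against your environment, and run the script from an elevated session.

```python
import subprocess

# Assumed default installation path; adjust to your environment.
BSSCFG = r"C:\Program Files\Bitvise SSH Server\BssCfg.exe"
SETTINGS_FILE = r"C:\Path\BvSshServerSlave.wit"  # exported instance type settings

def import_command(settings_file: str) -> list[str]:
    """Build the documented 'BssCfg instanceType importBin' command line."""
    return [BSSCFG, "instanceType", "importBin", settings_file]

def provision_slave(settings_file: str = SETTINGS_FILE) -> None:
    """Import the slave settings, then start the service. Run elevated."""
    subprocess.run(import_command(settings_file), check=True)
    # "BvSshServer" as the service name is an assumption; verify on your system.
    subprocess.run(["net", "start", "BvSshServer"], check=True)
```

Calling provision_slave() on a freshly installed node performs the same two steps described above: import the exported instance type settings, then start the SSH Server.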