"To add new management servers to a running MySQL Cluster, it is also
necessary to perform a rolling restart of all cluster nodes after modifying
any existing config.ini files. For more information about issues arising
when using multiple management nodes, see Section 184.108.40.206, “Limitations
Relating to Multiple MySQL Cluster Nodes”."
> I’ve been looking through the documentation and searching the web for a
> howto on adding more management nodes to an NDB cluster.
> So far I’ve found absolutely nothing that describes a procedure for this.
> Do any of you have any links that describe this?
> Best regards,
> Asle Ness
> MySQL Cluster Mailing List
> For list archives: http://lists.mysql.com/cluster
> To unsubscribe: http://lists.mysql.com/cluster
Re: Adding management node to running ndb cluster?
To add more nodes without downtime, you need a high-availability setup
such that a rolling restart is possible.
The ability to add MySQL Cluster data nodes while the cluster is still
up and running is known as performing an "online" addition of nodes.
This became available starting with MySQL Cluster release 6.4.0.
Performing an online addition increases availability, since there is no
downtime, and also allows more flexible management of the cluster's
resources.
Certain changes to the cluster configuration require a complete restart
of the cluster, and possibly even a backup and restore of the data. Such
configuration changes include NoOfReplicas and any of the underlying
network configuration such as hostnames and IP addresses. Please check
the manual to see whether any of the changes you require will need a
system restart or can be applied with a rolling restart.
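As an example, NoOfReplicas lives in the [ndbd default] section of
config.ini; once the cluster holds data, changing it requires a full
system restart (the values below are illustrative, not recommendations):

```ini
[ndbd default]
NoOfReplicas=2      # cannot be changed online; needs a full restart
DataMemory=512M     # many memory parameters only need a rolling restart
```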
The steps required to add new data nodes to a running cluster are:
1. Edit the config.ini file to add new [ndbd] sections corresponding to
the new nodes that will be added. If using more than one management
server, make sure to update the config.ini file on each of the
management servers.
2. Perform a rolling restart of all the MySQL Cluster management
servers. Make sure you use the /--reload/ or /--initial/ option to
force the new configuration to be read.
3. Perform a rolling restart of all the existing data nodes. It is not
required to use /--initial/ here and doing so is usually not desirable.
4. Perform a rolling restart of any SQL or API nodes.
5. Perform an initial start of the new data nodes that are being added
to the cluster.
6. If new node groups are being added, then execute the /CREATE
NODEGROUP/ command in the management client for the new nodes.
7. Redistribute the cluster's data amongst all the data nodes by
issuing an /ALTER ONLINE TABLE ... REORGANIZE PARTITION/ statement
for each of the NDB tables in your databases.
8. Reclaim the space by issuing an /OPTIMIZE TABLE/ statement for each
NDB table, or by using a null alter table statement such as /ALTER TABLE
... ENGINE=NDBCLUSTER/.
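Put together, and assuming two new data nodes with node IDs 5 and 6 on
hosts datanode5 and datanode6 (all IDs, hostnames, and paths here are
illustrative, not taken from your cluster), steps 1 through 6 might look
like this:

```shell
# 1. On every management server, add sections like these to config.ini:
#       [ndbd]
#       NodeId=5
#       HostName=datanode5
#       [ndbd]
#       NodeId=6
#       HostName=datanode6

# 2. Rolling restart of the management server(s), forcing the new
#    configuration to be read:
ndb_mgm -e "1 STOP"       # node ID 1 = this management node
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload

# 3. Rolling restart of each existing data node (no --initial):
ndb_mgm -e "2 RESTART"
ndb_mgm -e "3 RESTART"

# 4. Restart each SQL or API node in turn, e.g. for mysqld:
mysqladmin shutdown
mysqld_safe &

# 5. Initial start of the new data nodes (run on datanode5 and datanode6):
ndbd --initial

# 6. Create a node group for the new nodes in the management client:
ndb_mgm -e "CREATE NODEGROUP 5,6"
```

This is a sketch, not a script to run as-is: each restart should be
verified (e.g. with SHOW in ndb_mgm) before moving to the next node, so
that the cluster stays available throughout.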
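Steps 7 and 8 must be repeated for every NDB table. A small shell sketch
that generates the required statements for a list of tables (the table
names t1 and t2 are placeholders, and the function name is my own):

```shell
#!/bin/sh
# Hypothetical helper: emit the redistribution (step 7) and space
# reclamation (step 8) statements for each NDB table name given.
gen_reorg_sql() {
  for t in "$@"; do
    printf 'ALTER ONLINE TABLE %s REORGANIZE PARTITION;\n' "$t"
    printf 'OPTIMIZE TABLE %s;\n' "$t"
  done
}

# Print the statements for two example tables.
gen_reorg_sql t1 t2
```

The output can then be piped into the mysql client, for example
`gen_reorg_sql t1 t2 | mysql mydb`.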