2) Connect a cable to port 3 or 4, which is BOND0, and a mandatory cable to port 5, which is
the IPMI port.
3) Once the cabling from all 4 nodes to the switch is done, power on the nodes.
4) Each node should be sitting at the login screen.
5) There will be no communication between the nodes or to the IPMI IPs, as they still hold the
older IP configuration.
6) Connect to each node using a KVM.
7) Log in to the node as the “rksupport” user.
8) We will be sharing the passwords for all the nodes when we start the procedure.
9) After logging in as rksupport, run the commands below to set up the new IPMI IP
configuration. This step must be done on each node.
• sudo ipmitool lan set 1 ipsrc static
• sudo ipmitool lan set 1 ipaddr <ip_address>
• sudo ipmitool lan set 1 netmask <netmask>
• sudo ipmitool lan set 1 defgw ipaddr <gateway_ip>
10) Keep a record of the new set of IPs used for the IPMI configuration.
11) Once the IPMI IP is changed on all 4 nodes, you should be able to log in to the IPMI through
a browser.
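Filled in with example values, the IPMI reconfiguration for one node can be sketched as below. The IP, netmask, and gateway are hypothetical placeholders; substitute the real values assigned to each node. The DRY_RUN guard just prints the commands so they can be reviewed before anything is applied.

```shell
#!/bin/sh
# Sketch: apply static IPMI addressing on one node.
# 10.0.10.21 / 255.255.255.0 / 10.0.10.1 are EXAMPLE values only.
IPMI_IP="10.0.10.21"
IPMI_NETMASK="255.255.255.0"
IPMI_GATEWAY="10.0.10.1"

DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "+ sudo $*"    # preview mode: print the command instead of running it
  else
    sudo "$@"
  fi
}

run ipmitool lan set 1 ipsrc static
run ipmitool lan set 1 ipaddr "$IPMI_IP"
run ipmitool lan set 1 netmask "$IPMI_NETMASK"
run ipmitool lan set 1 defgw ipaddr "$IPMI_GATEWAY"
```

After applying for real (DRY_RUN=0), `sudo ipmitool lan print 1` shows the active LAN settings so the change can be verified before moving to the next node.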
Step-by-step Guide
1. Stop services on all nodes
2. On nodes with a wrong IP, fix the bond configuration. Connect to each node through its IPv6
link-local address and perform the following steps.
This step can also be done manually if preferred. In that case, refer to the section "Standard
form of network configuration file" below for the expected configuration.
2.4) Restart the network and confirm that the correct IPs are now in use. It is recommended
to do this one node at a time, logging in through the IPv6 link-local address during the process.
Retry 2-3 times if it does not go through. If the network still fails to restart, reboot the
node.
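As an illustration, assuming the node is reachable on a link-local address such as fe80::aaaa:bbbb:cccc:dddd via the local interface eth0 (both hypothetical; use the node's actual address and your workstation-facing interface), the session could look like this. The `%eth0` suffix is the IPv6 zone ID: link-local addresses are only meaningful per interface, so the outgoing interface must be named explicitly.

```shell
# Log in over IPv6 link-local (address and interface are examples):
ssh rksupport@fe80::aaaa:bbbb:cccc:dddd%eth0

# On the node, restart networking; the exact service name depends on
# the distribution (e.g. "network" on older RHEL-based systems):
sudo systemctl restart network

# Confirm the expected data IP is now assigned to bond0:
ip addr show bond0
```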
Note: these files are actively managed by the node monitor based on information from the meta
datastore. Keep services down so that manual edits are not overwritten.
If step 4.10 or 4.11 fails, double-check whether the new IPs include IP(s) reused from removed
nodes. If that is the case, apply the recovery steps in that section.
Note: until the nodes reboot, the cockroachdb services will not be able to talk to each other
across nodes. This is OK and expected; only the local cockroach service is needed for the
following steps until reboot.
After the nodes reboot (STEP 6 at the end), the Node Monitor service will add the necessary
iptables rules to allow cockroach (and all other services) to communicate between nodes.
In a few cases it was observed that iptables was blocking the cockroach ports at this step,
which makes the above command fail.
Here the -d option is the local node's data IP and -s is the data IP of another node (one rule
per each other node).
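For illustration only: the data IPs below are made-up examples, and 26257 is CockroachDB's default inter-node port (confirm the port actually used in your deployment). This sketch prints one ACCEPT rule per peer node so the rules can be reviewed before being applied as root.

```shell
# EXAMPLE addressing only -- substitute the real data IPs of your nodes.
LOCAL_IP="10.0.20.11"                          # -d: this node's data IP
PEER_IPS="10.0.20.12 10.0.20.13 10.0.20.14"    # -s: each other node's data IP
PORT=26257   # CockroachDB's default inter-node port; confirm yours

# Print one ACCEPT rule per peer for review; run the printed lines as
# root (or pipe this script's output to "sudo sh") to apply them.
for peer in $PEER_IPS; do
  echo "iptables -I INPUT -p tcp -d $LOCAL_IP -s $peer --dport $PORT -j ACCEPT"
done
```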
Alternatively, flush iptables altogether; after the node reboots, all iptables rules will be
recreated. (Use this only as a last resort; try the above first.)
5. Start services on all nodes, confirm the services come up OK, and confirm that "rknodestatus"
shows all nodes in the OK state.