Hi all, it’s been a while but here I am!
There were some infrastructure changes at the place where I work, and I was asked to do a DNS change on a somewhat old Exadata X5. I had never done one before, so the idea of this post is to help others who might face the same issues I did.
The first thing I did was look up the documentation and go through the steps. Yes, there are blogs about it, but the docs can at least give you a first glance at the situation.
Long story short: Exadata has lots of components, and the new DNS servers need to be configured on all of them.
Here is a summary of the steps.
Connect to the switches, su to ilom-admin, and change the DNS:
su - ilom-admin
show /SP/clients/dns
set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3
show /SP/clients/dns
For my image I only needed to change /etc/resolv.conf; if you have a newer one you will need to use ipconf. That's why you need to go to the documentation - at least there we can hope they mention the pitfalls (well, keep reading and you will see that was not my case).
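For the older-image route, this is roughly what the change looks like on each database node (the search domain below is a placeholder; the nameserver addresses are the example ones used throughout this post):

```shell
# Back up the current file first, then edit it on each database node
cp /etc/resolv.conf /etc/resolv.conf.bak
vi /etc/resolv.conf
# It should end up looking like this (search domain is a placeholder):
#   search example.com
#   nameserver 192.168.16.1
#   nameserver 192.168.16.2
#   nameserver 192.168.16.3
```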
I also changed the DNS on each database node's ILOM, running ipmitool from each node:
ipmitool sunoem cli 'show /SP/clients/dns'
ipmitool sunoem cli 'set /SP/clients/dns nameserver=192.168.16.1,192.168.16.2,192.168.16.3'
ipmitool sunoem cli 'show /SP/clients/dns'
Cell nodes – Here things start to get interesting
For the storage cells there are some points that need to be taken into consideration:
Increase the ASM disk_repair_time. The goal here is to avoid a full rebalance if you finish the work within its timeframe. If you don't know this parameter: ASM will wait up to the interval specified by DISK_REPAIR_TIME for offline disk(s) to come back online. If the disk(s) come back within this interval, a resync operation occurs, where only the extents that were modified while the disks were offline are written to them once back online. If the disk(s) do not come back within this interval, ASM initiates a forced drop of the disk(s), which triggers a rebalance.
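As a sketch of that step (the diskgroup name DATA and the 8.5h value below are examples; size the window to your own maintenance), checking and raising disk_repair_time from an ASM instance looks like this:

```shell
# Run from a database node as the grid infrastructure owner
sqlplus -s / as sysasm <<'EOF'
-- Current value per diskgroup
select g.name, a.value
  from v$asm_diskgroup g, v$asm_attribute a
 where g.group_number = a.group_number
   and a.name = 'disk_repair_time';
-- Raise it to cover the expected maintenance window (example: 8.5 hours)
alter diskgroup DATA set attribute 'disk_repair_time' = '8.5h';
EOF
```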
On each cell node we need to make sure all disks are OK, stop all grid disks, stop all cell services, and use ipconf to change the DNS configuration:
# Check that putting the grid disks offline will not cause a problem for Oracle ASM - the third column should say Yes for all of them
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome
# Inactivate all grid disks on the cell - this may take a while to complete
cellcli -e ALTER GRIDDISK ALL INACTIVE
# Confirm the grid disks are offline: asmmodestatus should be OFFLINE or UNUSED, and asmdeactivationoutcome=Yes for all grid disks
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus,asmdeactivationoutcome
# Confirm that the disks are offline
cellcli -e LIST GRIDDISK
# Shut down the cell services and the ocrvottargetd service
cellcli -e ALTER CELL SHUTDOWN SERVICES ALL
service ocrvottargetd stop  # on some images this service does not exist
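If you don't want to keep re-running the LIST command by hand while the deactivation completes, a small (hypothetical) polling loop does the trick:

```shell
# Poll until no grid disk reports anything other than OFFLINE or UNUSED
while cellcli -e 'LIST GRIDDISK ATTRIBUTES name,asmmodestatus' \
      | grep -vqE 'OFFLINE|UNUSED'; do
  echo "still waiting for grid disks to deactivate..."
  sleep 30
done
```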
To execute ipconf the old way you only need to call it and follow the prompts, but if you have a newer image you will need to provide its parameters on the command line, as shown in the documentation.
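For reference, a minimal sketch of the interactive route; the non-interactive parameter syntax varies by image version, so check the help output and the docs for your release before relying on it:

```shell
# Older images: interactive mode - run it and follow the prompts,
# entering the new DNS servers when asked
/opt/oracle.cellos/ipconf

# Verify the resulting configuration afterwards
# (flag available on most images; confirm with ipconf -help)
/opt/oracle.cellos/ipconf -verify
```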
The documentation says that after this we could start the cell services back up, but I would recommend validating the DNS before doing that. Why? Because mine did not work, and I could have had a bigger issue with a cell node that had no DNS trying to start its services.
So, how to test? Use nslookup, dig, and curl:
nslookup dns_domain.com
curl -v 192.168.16.1:53
dig another_server_in_the_network
My tests did not work: I was able to ping the DNS servers but not to resolve any name. I had an SR open on MOS, but it did not help much either. Since this is a production system, I kept looking and checked whether the firewall was up on the Linux side - and to my surprise, it was.
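Checking the firewall state on the cell is quick (standard iptables tooling on the cell OS):

```shell
# See whether the firewall service is running and what rules are loaded
service iptables status
iptables -L -n --line-numbers | head -20
```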
I tried to manually add rules to iptables, but that did not work either. Then I came across this note: Exadata: New DNS server is not accessible after changing using IPCONF (Doc ID 1581417.1).
And there it was, I needed to restart the cellwall service to recreate the iptables rules.
# Restart the cellwall service to recreate the iptables rules
service cellwall restart
service cellwall status
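Once name resolution works again, bringing the cell back is the reverse of the shutdown steps (standard cellcli commands):

```shell
# Start the cell services back up
cellcli -e ALTER CELL STARTUP SERVICES ALL
# Reactivate all grid disks on the cell
cellcli -e ALTER GRIDDISK ALL ACTIVE
# Watch the disks come back: asmmodestatus should move from SYNCING to ONLINE
cellcli -e LIST GRIDDISK ATTRIBUTES name,asmmodestatus
```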
One final point: check whether ASM started a rebalance. If it did, do not bring down another cell node until the rebalance is finished, otherwise you may run into deeper issues.
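The rebalance check can be done from an ASM instance; any row returned here means an operation is still in flight:

```shell
# Run from a database node as the grid infrastructure owner
sqlplus -s / as sysasm <<'EOF'
select group_number, operation, state, est_minutes
  from v$asm_operation;
EOF
```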
I hope it helps!