This FAQ applies to any setup with an mdadm RAID device hosting the data partition, including AWS instances with instance store and other cloud instances.
Rebooting a VM running ClustrixDB is possible; however, if you have set up an mdadm RAID volume for your ClustrixDB data directory, you must ensure that after reboot the mdadm RAID device is re-assembled and the data partition is mounted at the correct location.
By default, mdadm will re-assemble any array it finds on the available disks, but it may not assign that array the same device name as before. That is fine as long as you do not rely on the device name to mount the filesystem. To work around this, label the filesystem and mount it by label. Editing /etc/fstab is the best way to auto-mount; using an init script is trickier because it would have to run before clxnode (the ClustrixDB process) starts.
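As a minimal illustration of why mounting by label is safer, the sketch below looks the filesystem up by its label rather than by device name (it assumes the label CLUSTRIX-DATA, which is the one assigned later in this guide):

```shell
# blkid -L resolves a filesystem label to whatever device currently holds it,
# so the lookup keeps working even if the array comes back as /dev/md127
# instead of /dev/md0 after a reboot.
dev=$(blkid -L CLUSTRIX-DATA 2>/dev/null)
if [ -n "$dev" ]; then
    msg="label CLUSTRIX-DATA is on $dev"
else
    msg="label CLUSTRIX-DATA not found"
fi
echo "$msg"
```

On a host where the label exists this prints the current device path; otherwise it reports that the label was not found.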
Below I describe how I would set up the storage from scratch so that it persists across reboots (using an AWS instance with instance store):
# Use super user:
sudo su

# Install mdadm
yum -y install mdadm

# Create the RAID device on md0 (or whatever name you prefer)
yes | mdadm --create /dev/md0 --level=0 -c64 --raid-devices=2 /dev/xvdf /dev/xvdg

# Make the filesystem and the mount point
mkfs -t ext4 /dev/md0
mkdir -p /data/clustrix

# Give the partition a label
e2label /dev/md0 CLUSTRIX-DATA

# Edit fstab to mount it by label instead of device name. This is necessary as the RAID array may be
# assigned a different device name after reboot (in this case md127)
echo 'LABEL=CLUSTRIX-DATA /data/clustrix ext4 defaults,noatime,nodiratime 0 2' >> /etc/fstab

# Test fstab by running mount:
mount -a

# The filesystem should be mounted now. Verify with:
df -H

# Back up the mdadm config to its config file (optional)
mdadm --verbose --detail --scan >> /etc/mdadm.conf
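After a reboot you can confirm that the data partition came back at the expected location. The sketch below (assuming the /data/clustrix mount point from this guide) checks /proc/mounts directly, which avoids depending on the md device name entirely:

```shell
# /proc/mounts records every mounted filesystem by mount point, so this check
# works whether the array re-assembled as /dev/md0 or /dev/md127.
if grep -q ' /data/clustrix ' /proc/mounts 2>/dev/null; then
    status="mounted"
else
    status="not mounted"
fi
echo "ClustrixDB data partition is $status"
```

If the partition is reported as not mounted, run `mount -a` (or inspect `cat /proc/mdstat` to see how the array was assembled) before starting clxnode.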