AhsayUBS (Ahsay Universal Backup System) is a low-cost yet powerful operating system for backup appliances. It has been optimized so that AhsayCBS runs smoothly on it. With AhsayUBS, you can deploy AhsayCBS onto bare server hardware within a few minutes.
It is a customized version of the FreeNAS firmware with AhsayCBS bundled, specifically optimized to run AhsayCBS. Apart from AhsayCBS, it also contains some basic features that system administrators require, e.g. SSH and system monitoring tools.
For backwards compatibility with older AhsayUBS versions, the UFS storage model is also supported. After the upgrade, the 'geom_concat.ko', 'geom_stripe.ko', and 'geom_raid5.ko' kernel modules will be loaded by FreeBSD to support the UFS storage model. To check whether these kernel modules have been loaded correctly, run the “kldstat” command, which will return output similar to the following.
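A console session for this check might look like the following sketch (the module IDs, addresses, and sizes are illustrative placeholders and will differ on each system; only the module names in the last column matter):

```
# kldstat
Id Refs Address            Size     Name
 1   14 0xffffffff80200000 1f8a7a8  kernel
 2    1 0xffffffff82211000 7518     geom_concat.ko
 3    1 0xffffffff82219000 8a30     geom_stripe.ko
 4    1 0xffffffff82222000 d178     geom_raid5.ko
```

If any of the three GEOM modules is missing from the list, the UFS storage model will not be usable.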
The 'Master Storage Device' on AhsayUBS is preserved in UFS format and is mounted on '/ubs/mnt/eslsfw' at system boot time. The following example shows a UFS filesystem mounted as '/ubs/mnt/eslsfw'.
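As an illustrative sketch, the mount can be confirmed from the console (the backing device name below is a placeholder, not the actual AhsayUBS device name):

```
# mount | grep eslsfw
/dev/<master-device> on /ubs/mnt/eslsfw (ufs, local, soft-updates)
```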
The Optional Labelled Device in legacy AhsayUBS will be migrated in this version of AhsayUBS as one of the storage types, called “Optional Storage”, under “Additional Storage”. Volume status and UFS filesystem integrity checking (fsck) are also available in this AhsayUBS version. For details, please refer to the [Storage] section.
AhsayUBS is implemented with ZFS v5 and ZPOOL v28. Existing ZPOOL(s) will not be upgraded; only newly created ZPOOLs will have the ZIL (ZFS Intent Log) applied.
As the ZFS storage model is based on a GMIRROR and ZFS design, the 'geom_mirror.ko', 'opensolaris.ko', and 'zfs.ko' kernel modules will be loaded by FreeBSD. The GEOM kernel modules previously used for UFS support, 'geom_concat.ko', 'geom_stripe.ko', and 'geom_raid5.ko', will also be loaded. To check whether these kernel modules have been loaded correctly, run the “kldstat” command, which will return output similar to the following.
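An illustrative console session for this check (module IDs, addresses, and sizes are placeholders; verify the module names in the last column):

```
# kldstat
Id Refs Address            Size     Name
 1   22 0xffffffff80200000 1f8a7a8  kernel
 2    1 0xffffffff82211000 e360     geom_mirror.ko
 3    2 0xffffffff82220000 5a270    opensolaris.ko
 4    1 0xffffffff8227b000 135c4a0  zfs.ko
 5    1 0xffffffff835d8000 7518     geom_concat.ko
 6    1 0xffffffff835e0000 8a30     geom_stripe.ko
 7    1 0xffffffff835e9000 d178     geom_raid5.ko
```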
The 'Master Storage Device' on AhsayUBS is configured as a ZPOOL with a pool name in the 'eslsfwx{UID}' format. The ZFS pool is mounted on '/ubs/mnt/eslsfw' at system boot time. The following example shows a 191 GB zpool volume “eslsfwx839830C2” mounted as '/ubs/mnt/eslsfw'.
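A sketch of how this can be verified from the console (the 191 GB pool size and pool name come from the example above; the allocation, free space, and capacity figures are illustrative):

```
# zpool list eslsfwx839830C2
NAME              SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
eslsfwx839830C2   191G  42.5G   148G    22%  1.00x  ONLINE  -
# zfs get -H -o value mountpoint eslsfwx839830C2
/ubs/mnt/eslsfw
```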
For volume status and ZFS filesystem integrity checking, please refer to the [Storage] section for details.
The other System Firmware Devices, “esgpbt”, “esosfw”, and “esfmfw”, are still mounted from the /etc/fstab file.
The ZFS storage model is used for the following AhsayCBS locations:
The other “System Firmware Devices” such as “esgpbt”, “esosfw”, and “esfmfw” will remain unchanged as GEOM MIRROR based UFS volumes. The GEOM device names are in the following formats:
For production AhsayUBS servers configured with ZFS volume(s), it is strongly recommended to install at least 4 GB of RAM, as ZFS volumes require a relatively large amount of memory to run. The amount of memory required depends on the size of the ZFS volume and the amount of I/O activity.
In order to safeguard the data integrity of files on the ZFS volume, a weekly “zpool scrub” (zpool volume data integrity check) is performed starting at 00:00 every Sunday morning, to verify that the checksums of all the data in the specified ZFS pools are correct.
The scheduled start time of the “zpool scrub” is currently not user-configurable, and the scrub cannot be disabled in this version of AhsayUBS.
Once a “zpool scrub” job has started, it is not possible to stop it.
To check the status of the “zpool scrub”, use the “zpool status” command, which will return output similar to the following. In this example the “zpool scrub” has checked 56.33% of the pool eslsfwx839830C2:
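A sketch of what the output might resemble on a ZPOOL v28 system (the timestamps, throughput, and mirror/disk layout below are illustrative assumptions; the pool name and 56.33% figure come from the example above):

```
# zpool status eslsfwx839830C2
  pool: eslsfwx839830C2
 state: ONLINE
 scan: scrub in progress since Sun Oct 19 00:00:01 2014
    108G scanned out of 191G at 16.1M/s, 1h28m to go
    0 repaired, 56.33% done
config:

	NAME               STATE     READ WRITE CKSUM
	eslsfwx839830C2    ONLINE       0     0     0
	  mirror-0         ONLINE       0     0     0
	    ada0p5         ONLINE       0     0     0
	    ada1p5         ONLINE       0     0     0

errors: No known data errors
```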
If an additional data integrity check is required between the scheduled weekly checks, a manual “zpool scrub” can be initiated using the “zpool scrub {% POOL_NAME%}” command.
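For example, using the pool name from the example above (substitute your own pool name, which will differ per system):

```
# zpool scrub eslsfwx839830C2
```

The command returns immediately; the scrub runs in the background and its progress can be monitored with “zpool status”.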
As with the weekly “zpool scrub”, the AhsayCBS service and backup/restore operations can continue to run as normal.
There may be some performance overhead associated with a “zpool scrub”, i.e. increased CPU utilization, memory usage, and I/O activity. The performance overhead is proportional to the amount of data on the ZFS volume.
ZFS version 5 and ZPOOL v28 on AhsayUBS have undergone an extended period of intensive performance and load testing, consistently delivering superior performance and data integrity results in comparison to UFS.
For legacy AhsayUBS environments that wish to migrate from the UFS to the ZFS storage model, only a manual migration method is available: offload your locally stored User Home data, AhsayUBS settings, and AhsayCBS settings to a temporary storage device, reinstall AhsayUBS from scratch, and then reload your data and settings.
The migration process will generally involve:
The process of setting up the AhsayUBS firmware on a machine is done in four stages:
There are different software/hardware requirements for each stage. Please ensure that all the requirements are met before deploying the AhsayUBS to the machine.