Life Preserver/10.0

From PC-BSD Wiki
Revision as of 11:35, 31 December 2013 by Drulavigne (Talk | contribs)


The built-in Life Preserver utility was designed to take full advantage of ZFS snapshot functionality. This utility allows you to schedule snapshots of a local ZFS pool and to optionally replicate those snapshots to another system using SSH. This design provides several benefits:

  • a snapshot provides a "point-in-time" image of the ZFS pool. In one way, this is similar to a full system backup as the snapshot contains the information for the entire filesystem. However, it has several advantages over a full backup. Snapshots occur instantaneously, meaning that the filesystem does not need to be unmounted and you can continue to use applications on your system as the snapshot is created. Since snapshots contain the meta-data ZFS uses to access files, the snapshots themselves are small and subsequent snapshots only contain the changes that occurred since the last snapshot was taken. This space efficiency means that you can take snapshots often. Snapshots also provide a convenient way to access previous versions of files as you can simply browse to the point-in-time for the version of the file that you need. Life Preserver makes it easy to configure when snapshots are taken and provides a built-in graphical browser for finding and restoring the files within a snapshot.
  • replication is an efficient way to keep the files on two systems in sync. In the case of Life Preserver, the snapshots taken on the PC-BSD system will be synchronized with their versions stored on the backup server.
  • SSH means that the snapshots will be sent to the backup server over an encrypted connection, which protects the contents of the snapshots.
  • having a copy of the snapshots on another system makes it possible to restore the pool should your PC-BSD® system become unusable.

If you decide to replicate the snapshots to a backup server, keep the following points in mind when choosing which system to use as the backup server:

  • the backup server must be formatted with the latest version of ZFS, also known as ZFS feature flags or ZFSv5000. Operating systems that support this version of ZFS include PC-BSD® and FreeBSD 9.2 or higher, and FreeNAS 9.1.x or higher.
  • that system must have SSH installed and the SSH service must be running. If the backup server is running PC-BSD, you can start SSH using Service Manager. If that system is running FreeNAS, SSH is already installed and how to configure this service is described in Backing Up to a FreeNAS System. If the system is running FreeBSD, SSH is already installed, but you will need to start SSH.
  • if the backup server is running PC-BSD, you will need to open TCP port 22 (SSH) using Firewall Manager. If the server is running FreeBSD and a firewall has been configured, add rules to open this port in the firewall ruleset. FreeNAS does not run a firewall by default.
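On a FreeBSD backup server, installing nothing extra is needed; SSH only has to be enabled and started. A minimal sketch, run as root:

```shell
# Enable sshd at boot and start it immediately on a FreeBSD backup server
# (sysrc edits /etc/rc.conf for you, so the setting survives a reboot):
sysrc sshd_enable="YES"
service sshd start
```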
Figure 8.21a: Life Preserver Icon in System Tray


Using the Life Preserver GUI

An icon to the Life Preserver utility, seen in Figure 8.21a, can be found in the system tray.

To remove the icon from the system tray, right-click it and select “Close Tray”. To re-add it to the tray, go to Control Panel → Life Preserver or type pc-su life-preserver & at the command line. If your desktop manager does not provide a system tray, you will need to instead manage backups from the command line.

To open the screen shown in Figure 8.21b, right-click the Life Preserver icon, select "Open Life Preserver", and input your password.

Initially, this screen will be greyed out as you have not yet created a backup schedule. To do so, click File → Manage Pool and select the name of the pool to manage. The following examples are for a pool named tank. This will launch the "New Life Preserver Wizard", allowing you to configure the backup schedule. Click "Next" to see the screen in Figure 8.21c.

Figure 8.21b: Life Preserver Screen
Figure 8.21c: Snapshot Schedule Screen

This screen is used to schedule how often a snapshot is taken of the system. The default is to perform one snapshot per day at 1:00 AM. You can either change the time that this one daily snapshot occurs or select to take a snapshot once every hour, 30 minutes, 10 minutes or 5 minutes.

After making your selection, press "Next" to see the screen shown in Figure 8.21d.

Figure 8.21d: Snapshot Pruning Screen

This screen schedules how long to keep the created snapshots on the PC-BSD® system. By default, the last 7 days of snapshots are stored. In other words, once a snapshot becomes older than 7 days, it is automatically deleted. You can choose either to keep snapshots for a set number of days or to keep a set quantity of snapshots.

After making your selection, press "Next" to see the screen shown in Figure 8.21e.

Figure 8.21e: Replication Server Screen

If you wish to keep a copy of the snapshots on another system, this screen is used to indicate which system to send the snapshots to. If you do not have another system available, you can click “Next” and then “Finish” to complete the configuration.

If you do have another system available which is running the same version of ZFS and has SSH enabled, check the "Replicate my data" box. Before entering the information in these fields, you need to first configure the backup system; an example configuration is demonstrated in Backing Up to a FreeNAS System. Once the backup system is ready, input the following information:

  • Host Name: of the remote system that will store your backup. If the backup server is on your local network, the host name must be in your hosts file or in the database of the local DNS server. You may find it easier to instead input the IP address of the backup server as this will eliminate any host name resolution problems.
  • User Name: this user must have permission to log in to the system that will hold the backup. If the account does not already exist, you should create it first on the backup server.
  • SSH Port: port 22, the default port used by SSH is selected for you. You only need to change this if the remote system is using a non-standard port to listen for SSH connections. In that case, use the up/down arrows or type in the port number.
  • Remote Dataset: input the name of an existing ZFS dataset on the backup server. This is where the backups will be stored. To get a list of existing datasets, type zfs list on the remote server. The "NAME" column in the output of that command gives the full name of each dataset. Type the full name of the desired dataset into this field. When selecting a dataset, make sure that the specified "User Name" has permission to write to it.
  • Frequency: snapshots can either be sent at the same time that they are created, or you can set a time at which the queued snapshots are sent.
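Before filling in these fields, the remote dataset name can be sanity-checked from the PC-BSD® system over SSH. The host address and user name below are hypothetical; substitute your own:

```shell
# List the dataset names available on the backup server
# (hypothetical host 192.168.1.50, user "backup", default port 22):
ssh -p 22 backup@192.168.1.50 zfs list -o name
```

The names printed are exactly what belongs in the "Remote Dataset" field.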

Once you have input the information, click "Next" and then "Finish". Life Preserver will check that it can connect to the backup server and will prompt for the password of "User Name". A second pop-up message will remind you to save the SSH key to a USB stick (as described below) as this key is required should you ever need to perform an operating system restore.

NOTE: If you don't receive the pop-up message asking for the password, check that the firewall on the backup system, or a firewall within the network, is not preventing access to the configured "SSH Port".
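A quick way to verify reachability before investigating further is to probe the configured SSH port from the PC-BSD® system. The host address below is hypothetical:

```shell
# Check that the backup server's SSH port answers within 5 seconds
# (hypothetical host 192.168.1.50, default port 22):
nc -z -w 5 192.168.1.50 22 && echo "SSH port is reachable"
```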

The rest of this section demonstrates the tasks that can be performed from the Life Preserver GUI now that the pool has an initial configuration.

View Menu and Configure Button

Once the schedule for tank has been created, the "Status" tab shown in Figure 8.21f will become active and will show the current state of the pool. The "View" menu lets you select "Basic" or "Advanced" view. "Advanced" view has been selected in the example shown in Figure 8.21f.

Figure 8.21f: Life Preserver in Advanced View

In this example, the ZFS pool is active, consists of one disk, and the date and time of the last snapshot are displayed. The green status indicates that the latest scheduled replication was successful.

If you click the "Configure" button, the screen shown in Figure 8.21g will open. This allows you to modify the settings of the replication server in the "Replication" tab and to change the schedule and pruning options in the "Local Snapshots" tab.

Figure 8.21g: Modifying the Configuration

Restore Data Tab

The "Restore Data" tab, seen in Figure 8.21h, is used to view the contents of the local snapshots and to easily restore any file which has since been modified or deleted.

Figure 8.21h: Viewing the Contents of the Snapshots

In this example, the system has been configured to take a snapshot every 5 minutes. Since files have been modified on this system, the blue time slider bar indicates that several snapshots are available, as a snapshot only occurs if changes have been made within the scheduled time increment. Click the arrows to go back or forward one snapshot at a time. Alternatively, drag the slider until you are viewing the desired snapshot time.

Once you have selected the desired date and time, use the drop-down menu to select the portion of the filesystem to view. In this example, the user has selected /usr/home/dru as that is the user's home directory. The user can now expand the directory names to view the files within each directory.

If your intent is to restore an earlier version of a file or a file that has been deleted, go back to the desired date and time, highlight the file, and click the "Restore" button. A copy of that file as it appeared at that point in time will be created in the same directory, with -reversion# added to the filename, where # is incremented if multiple revisions of that filename already exist. This way, any current version or restored version of the file will never be overwritten.
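The naming scheme can be sketched in shell. This is purely illustrative of the behavior described above (Life Preserver does this internally; the helper name is hypothetical, and the exact placement of the suffix may differ):

```shell
# Find the next free "-reversion#" name for a restored file, so no
# existing version is ever overwritten:
next_restore_name() {
  base="$1"            # e.g. report.txt
  n=1
  # Increment the revision number until the name is unused
  while [ -e "${base}-reversion${n}" ]; do
    n=$((n + 1))
  done
  printf '%s-reversion%s\n' "$base" "$n"
}
```

For example, if report.txt-reversion1 already exists, restoring report.txt again produces report.txt-reversion2.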

File Menu

The "File" menu contains the following options:

  • Manage Pool: this will be greyed out if you have already configured your ZFS pool. If you have a second ZFS pool, you can select this option in order to start the Life Preserver Configuration Wizard for that pool.
  • Unmanage Pool: if you wish to disable ZFS snapshots, select the ZFS pool name. Pop-up menus will ask if you are sure and then ask if you also want to delete the local snapshots from the system. If you choose to delete these snapshots, you will lose all of the older versions of the files contained in those backups. Once you have unmanaged a pool, you will need to use "Manage Pool" to rerun the Life Preserver Configuration Wizard for that pool.
  • Save Key to USB: when you configure the replication of local snapshots to a remote system, you should immediately copy the automatically generated SSH key to a USB stick. Insert a FAT32 formatted USB stick and wait for Mount Tray to mount it. Then, click this option to copy the key.
  • Close Window: closes the Life Preserver window. However, Life Preserver will continue to reside in the system tray.

Classic Backups Menu

This menu can be used to create an as-needed tarball of the user's home directory. This can be handy if you would like to make a backup of just your home directory in order to restore it on another system.

To make a tar backup, select the name of the user in the screen shown in Figure 8.21i.

Figure 8.21i: Backing Up a User's Home Directory

A pop-up menu will ask for the name of the archive to create. By default it will be in the format username-YYYYMMDD-HHMM. Press OK to start the backup. It will take a few minutes, depending upon the size of the home directory. Once finished, a message will indicate that the file was saved to /usr/home/.
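A roughly equivalent backup can be made manually from the command line. This is a sketch assuming a user named dru whose home directory is under /usr/home; the exact tar options Life Preserver uses may differ:

```shell
# Create a gzipped tarball of a home directory, named in the same
# username-YYYYMMDD-HHMM style, and save it to /usr/home/:
user=dru
stamp=$(date +%Y%m%d-%H%M)
tar -czf "/usr/home/${user}-${stamp}.tar.gz" -C /usr/home "${user}"
```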

The "Extract Home Dir" option can be used to restore a previously made home directory backup. Be sure this is what you want to do before using this option, as it will overwrite the current contents of the user's home directory. If your goal is to restore files without destroying the current versions, use the Restore Data Tab instead.

Snapshots Tab

The snapshots tab allows you to create or delete snapshots outside of the configured schedule. This tab contains two options:

  • New Snapshot: click this button to create a snapshot now, instead of waiting for the schedule. For example, you can create a snapshot before making changes to a file, so that you can preserve a copy of the previous version of the file. Or, you can create a snapshot as you make modifications to the system or upgrade software. When creating a snapshot, a pop-up message will prompt you to input a name for the snapshot, allowing you to choose a name that is useful in helping you remember why you took the snapshot.
  • Delete Snapshot: selecting this option will display the list of locally stored snapshots, listed in order from the oldest to the newest. If you select a snapshot, a warning will remind you that this is a permanent change that can not be reversed. In other words, the versions of files at that point in time will be lost.

Disks Tab

Using the Command Line Version of Life Preserver

The lpreserver command line utility can be used to manage snapshots and replication from the command line of a PC-BSD® or TrueOS® system. This command needs to be run as the superuser. To display its usage, type the command without any arguments:


Life-Preserver
---------------------------------
Available commands

  cronsnap - Schedule snapshot creation via cron
       get - Get list of lpreserver options
  listcron - Listing of scheduled snapshots
  listsnap - List snapshots of a zpool/dataset
    mksnap - Create a ZFS snapshot of a zpool/dataset
 replicate - Enable / Disable ZFS replication to a remote system
revertsnap - Revert zpool/dataset to a snapshot
    rmsnap - Remove a snapshot
       set - Set lpreserver options
    status - List datasets, along with last snapshot / replication date
     zpool - Manage a zpool by attaching / detaching disks

Each command has its own help text that describes its parameters and provides a usage example. For example, to receive help on how to use the lpreserver cronsnap command, type:

lpreserver help cronsnap

Life-Preserver
---------------------------------
Help cronsnap

Schedule a ZFS snapshot

Usage:

 For a listing of all scheduled snapshots
 # lpreserver listcron

 or

 To start / stop snapshot scheduling
 # lpreserver cronsnap <dataset> <action> <frequency> <numToKeep>

 action = start / stop
 frequency = daily@XX / hourly / 30min / 10min / 5min
                   ^^ Hour to execute
 numToKeep = Number of snapshots to keep total

Example:

 lpreserver cronsnap tank1/usr/home/kris start daily@22 10

 or

 lpreserver cronsnap tank1/usr/home/kris stop
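Putting that syntax together with the GUI defaults described earlier (one snapshot per day at 1:00 AM, keeping 7 days of snapshots) gives the following. The dataset name is hypothetical; substitute your own pool:

```shell
# Schedule one snapshot per day at 01:00, keeping the last 7
# (dataset name tank/usr/home/dru is an example; syntax as shown by
# `lpreserver help cronsnap`):
lpreserver cronsnap tank/usr/home/dru start daily@01 7
```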

Table 8.21a shows the command line equivalents to the graphical options provided by the Life Preserver GUI. Note that some options are only available from the command line.

Table 8.21a: Command Line and GUI Equivalents
Command Line | GUI | Description
cronsnap | Customize the backup configuration ➜ Local Snapshots | schedule when snapshots occur and how long to keep them; the "stop" option can be used to disable snapshot creation
get | (command line only) | list Life Preserver options
listcron | main screen of Life Preserver | list which ZFS pools have a scheduled snapshot
listsnap | Browse a snapshot | list snapshots of the specified dataset
mksnap | Make a new snapshot | create and replicate a new ZFS snapshot; by default, snapshots are recursive, meaning that a snapshot is taken of every dataset within the pool
replicate | Customize the backup configuration ➜ Replication | used to list, add, and remove backup servers; read the help for this command for examples
revertsnap | Revert an entire dataset | revert the dataset to the specified snapshot version
rmsnap | Remove selected dataset from automatic backup | deletes the specified snapshot; by default, all datasets within the snapshot are deleted
set | (command line only) | configures Life Preserver options; read the help for the list of configurable options
status | main screen of Life Preserver | lists the last snapshot name and replication status
zpool | (command line only) | used to attach/detach drives from the pool; read the help for examples

Mirroring the System to a Local Disk

In addition to replicating to a remote server, the lpreserver command also provides a method for attaching a new disk drive to an existing ZFS pool, and live-mirroring all data to that disk as data changes on the pool. The attached disk drive can be another internal disk or an external USB disk. When the new disk is attached for the first time, it will be erased and used solely as a mirror of the existing system drive. In addition, it will be made bootable, allowing you to boot from and use the new disk should the primary disk fail. In order to use this feature you will need the following:

  • an internal or external disk drive that is the same size or larger than the existing system disk.
  • since the disk will be formatted, it must either be blank or contain no data that you wish to keep.
  • in order to boot from the disk should the primary disk fail, the system must support booting from the new disk. For example, if you are using a USB disk, make sure that the BIOS is able to boot from a USB disk.

The superuser can setup the new disk using the following command. Replace tank1 with the name of your ZFS pool and /dev/da0 with the name of the disk to format. For example, the first USB disk will be /dev/da0 and the second internal hard disk will be /dev/ad1.

lpreserver zpool attach tank1 /dev/da0

When the disk is first attached, it will be formatted with ZFS and configured to mirror the size of the existing disk. GRUB will also be stamped on the new disk, making it bootable should another drive in the array go bad. You can add multiple disks to the pool in this manner, giving any level of redundancy that you require.

Once the disk is attached, it will begin resilvering. This process mirrors the data from the primary disk to the newly attached disk. This may take a while, depending upon the speed of the disks and the system load. Until this finishes, you should not reboot the system or detach the disk. You can monitor the resilvering process by typing zpool status.

To get a listing of the disks in your mirror, run this command, replacing tank1 with the name of the pool:

lpreserver zpool list tank1

If you are using an external drive, there may be occasions when you wish to disconnect the backup drive, such as when using a laptop and going on the road. In order to do this safely, it is recommended that you first offline the external disk using the following command:

lpreserver zpool offline tank1 /dev/da0

Then when you re-connect the drive, you can place it in online mode again using:

lpreserver zpool online tank1 /dev/da0

Sometimes, the disk name will change as a result of being disconnected. The lpreserver zpool list tank1 command can be used to get the proper device ID.

If you wish to permanently remove a disk from the mirror, run the following command. If you decide to re-attach this disk later, a full disk copy will again have to be performed.

lpreserver zpool detach tank1 /dev/da0

NOTE: In addition to working with mirrors, the lpreserver zpool command can also be used to manage a RAIDZ configuration, although you will probably not want to use external disks in this case.

Backing Up to a FreeNAS System

FreeNAS®[1] is an open source Network Attached Storage (NAS) operating system based on FreeBSD. This operating system is designed to be installed onto a USB stick or CF card so that it is kept separate from the storage disk(s) installed on the system. You can download the latest version of FreeNAS® as well as a PDF of its Users Guide from the download page[2] of the FreeNAS® website.

This section demonstrates how to configure FreeNAS® 9.2.0 as the backup server for Life Preserver to replicate to. It assumes that you have already installed this version of FreeNAS® using the installation instructions in the FreeNAS® 9.2.0 Users Guide and are able to access the FreeNAS® system from a web browser.

In order to prepare the FreeNAS® system to store the backups created by Life Preserver, you will need to create a ZFS volume, create and configure the dataset to store the backups, create a user account that has permission to access that dataset, and enable the SSH service.

In the example shown in Figure 8.21j, the system has four 1TB drives and the user has clicked Storage ➜ Volumes ➜ ZFS Volume Manager in order to create the ZFS pool.

Figure 8.21j: Creating a ZFS Volume in FreeNAS® 9.1.1

Input a "Volume Name", drag the slider to select the number of available disks, and click the "Add Volume" button. The ZFS Volume Manager will automatically select the optimal layout for both storage capacity and redundancy. In this example, a RAIDZ2 named volume1 will be created.

To create the dataset to backup to, click the + next to the entry for the newly created volume, then click "Create ZFS Dataset". In the example shown in Figure 8.21k, the "Dataset Name" is backups. Click the "Add Dataset" button to create the dataset.

Figure 8.21k: Creating a ZFS Dataset in FreeNAS® 9.1.1

To create the user account, go to Account ➜ Users ➜ Add User. In the screen shown in Figure 8.21l, input a "Username" that will match the "User Name" configured in Life Preserver. Under "Home Directory", use the browse button to browse to the location that you made to store the backups. Input a "Full Name", then input and confirm a "Password". When finished, click the "OK" button to create the user.

Figure 8.21l: Creating a User in FreeNAS® 9.1.1

Next, give the user permissions to the dataset by going to Storage ➜ Volumes, click the + next to the name of the volume, click the + next to the name of the dataset, then click "Change Permissions" for the expanded dataset. In the screen shown in Figure 8.21m, change the "Owner(user)" and "Owner(group)" to the user that you created. Click "Change" to save the change.

Figure 8.21m: Setting Permissions in FreeNAS® 9.1.1

Next, click on Shell and type the following command, replacing dru and volume1/backups with the name of the user, volume, and dataset that you created:

zfs allow -u dru create,receive,mount,userprop,destroy,send,hold volume1/backups
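You can confirm that the delegation took effect before closing the Shell; running zfs allow with just a dataset name prints the permissions delegated on it:

```shell
# Display the delegated permissions on the backup dataset
# (substitute your own volume/dataset name):
zfs allow volume1/backups
```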

Click the x in the upper right corner to close Shell. Then, to enable the SSH service, go to Services ➜ Control Services, shown in Figure 8.21n.

Figure 8.21n: Start SSH in FreeNAS® 9.1.1

Click the red "OFF" button next to SSH to enable that service. Once it turns to a blue "ON", the FreeNAS® system is ready to be used as the backup server.

To finish the configuration, go to the PC-BSD® system. In the Life Preserver screen shown in Figure 8.21e, use the IP address of the FreeNAS® system in the "Host Name" field, the name of the user you created in the "User Name" field, and the name of the dataset you created (in this example it is volume1/backups) in the "Remote Dataset" field.

Restoring the Operating System From a Replicated Life Preserver Backup

If you have replicated the system's snapshots to a backup server, you can use a PC-BSD® installation media to perform an operating system restore or to clone another system. Start the installation as usual until you get to the screen shown in Figure 8.21o.

Figure 8.21o: Selecting to Restore/Clone From Backup

Before you can perform a restore, the network interface must be configured. Click the "network connectivity" icon (second from the left) in order to determine if the network connection was automatically detected. If it was not, configure the network connection before continuing.

Next, click "Restore from Life-Preserver backup" and the "Next" button. This will start the Restore Wizard. Click "Next" to see the screen shown in Figure 8.21p.

Figure 8.21p: Select the Backup Server

Input the IP address of the backup server and the name of the user account used to replicate the snapshots. If the server is listening on a non-standard SSH port, change the "SSH port" number. Click "Next" to see the screen shown in Figure 8.21q.

Figure 8.21q: Select the Authentication Method

If you previously saved the SSH key to a USB stick, insert the stick then press "Next". Otherwise, change the selection to "Use password authentication" and press "Next". The next screen will either read the USB key or prompt for the password, depending upon your selection. The wizard will then attempt a connection to the server. If it succeeds, you will be able to select which backup to restore. After making your selection, the installer will proceed to the Disk Selection Screen shown in Figure 3.4e. After selecting the disk(s), the installer will show Figure 3.4f, but the ZFS datasets will be greyed out as they will be recreated from the backup during the restore. Press "Next" then "Next" to perform the restore.




