This article explains how to add physical disk drives to a XenServer host, so that more capacity is available for the XenServer guests.

Create Linux LVM partition

The first step is to write a new partition table to the second disk drive using the fdisk command: create a single primary partition spanning the whole disk and set its partition type to 8e (Linux LVM).

# fdisk /dev/sdb

The number of cylinders for this disk is set to 182401.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      182401  1465136001   8e  Linux LVM

Add new disk to XenServer LVM

The next step is to make the new disk partition known to LVM using the pvcreate command. The pvdisplay command then lists all physical volumes: the first is the original XenServer LVM partition on the first disk, the second entry is the new one.

# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VG_XenStorage-506b833c-f239-ad9a-350f-a7287ed3e259
  PV Size               1.36 TB / not usable 7.77 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              355739
  Free PE               340336
  Allocated PE          15403
  PV UUID               5bhPOM-r7J0-2cSV-2CnR-WKpI-Vzue-AJorZZ

  "/dev/sdb1" is a new physical volume of "1.36 TB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               VG_XenStorage-f6bce4ea-0bff-74f0-7a35-55238d517bd4
  PV Size               1.36 TB / not usable 7.25 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              357696
  Free PE               132
  Allocated PE          357564
  PV UUID               2z9UKx-eTRo-I42j-O120-4F37-i5Is-edTbX8
#
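
As a quick sanity check, the extent counts reported by pvdisplay multiply out to the reported sizes. With a PE size of 4 MiB, the new volume's 357696 extents come to about 1397 GiB, which matches the reported 1.36 TB:

```shell
# 357696 physical extents (PEs) of 4 MiB each, expressed in whole GiB
echo $(( 357696 * 4 / 1024 ))   # prints 1397
```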

Note

If the new disk already contains an LVM partition, it should be recognized automatically as a physical volume. In that case, the pvcreate command is not necessary.
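
Conversely, if the disk carries a stale LVM label from an earlier installation that you want to discard, it can be wiped first with pvremove. This is only a sketch: it destroys the LVM metadata on the partition, and pvremove refuses to run while the physical volume still belongs to a volume group.

```
# pvremove /dev/sdb1
```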

The new storage can either be added to the existing local storage or used to create a separate local storage repository. The first option has the disadvantage that the local storage then depends on two physical hard disks, which roughly doubles the risk of failure.

Alternative 1: Extend existing local storage

To extend the existing local storage, use the vgextend command to add the new physical volume to the existing volume group. The command takes the volume group name as a parameter, so we run vgdisplay first to look it up. After vgextend has run, vgdisplay shows the extra storage available in the volume group, and Citrix XenCenter also displays the new size for the local storage.

# vgdisplay
  --- Volume group ---
  VG Name               VG_XenStorage-ece12464-dfb3-8c83-36dd-e88c9f2c6b65
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1223
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                27
  Open LV               26
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.36 TB
  PE Size               4.00 MB
  Total PE              355739
  Alloc PE / Size       355727 / 1.36 TB
  Free  PE / Size       12 / 48.00 MB
  VG UUID               JrIxtn-2smY-Hvav-FzXk-Un7o-G07e-fzjAqU

# vgextend VG_XenStorage-ece12464-dfb3-8c83-36dd-e88c9f2c6b65 /dev/sdb1
  Volume group "VG_XenStorage-ece12464-dfb3-8c83-36dd-e88c9f2c6b65" successfully extended

# vgdisplay
  --- Volume group ---
  VG Name               VG_XenStorage-ece12464-dfb3-8c83-36dd-e88c9f2c6b65
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1235
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                30
  Open LV               28
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               2.72 TB
  PE Size               4.00 MB
  Total PE              713438
  Alloc PE / Size       356331 / 1.36 TB
  Free  PE / Size       357107 / 1.36 TB
  VG UUID               JrIxtn-2smY-Hvav-FzXk-Un7o-G07e-fzjAqU

Note

Spanning a volume group across several physical disks increases the failure risk: if any drive in the volume group fails, the whole storage, including your existing virtual machines, is lost.

Alternative 2: Create new local storage

To create a new XenServer storage repository, use the xe sr-create command. If the server is part of a pool, we have to specify which host the new storage is attached to, so we run the xe host-list command first to find its UUID.

# xe host-list
uuid ( RO)                : 2b9dd54b-7243-4504-a137-9a3519cb23fe
          name-label ( RW): server00
    name-description ( RO): Default install of XenServer


uuid ( RO)                : f9317306-9c3f-42fc-ad4d-3b4bd6084bab
          name-label ( RW): server01
    name-description ( RO): Default install of XenServer


uuid ( RO)                : 9e27f96c-a5b8-477c-b7f1-beac0d9c2b3f
          name-label ( RW): server02
    name-description ( RO): Default install of XenServer

# xe sr-create host-uuid=f9317306-9c3f-42fc-ad4d-3b4bd6084bab shared=false type=lvm content-type=user \
   device-config:device=/dev/sdb1 name-label="Local storage 2"

01debcfa-0cbc-77af-5dae-269df31a8dfb
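
The UUID printed on the last line identifies the new storage repository. As a sketch of how to double-check it, xe sr-list can be pointed at that UUID (the params selection here is just one possible choice):

```
# xe sr-list uuid=01debcfa-0cbc-77af-5dae-269df31a8dfb params=name-label,physical-size
```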

Warning

If your new disk already contains an LVM partition with data on it (for example, because you followed this tutorial before, copied data onto the new storage, and then re-installed XenServer on the main disk), the xe sr-create command will destroy all of that data.

Using the new storage

The new storage now shows up on the server's "Storage" tab in the XenCenter administration application. To use it, create a new virtual disk in the "Storage" tab of the virtual machine.
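
If you prefer the command line over XenCenter, the virtual disk can also be created and attached with the xe CLI. The following is only a sketch: the sr-uuid, vm-uuid, vdi-uuid and vbd-uuid placeholders and the 100GiB size must be replaced with your own values.

```
# xe vdi-create sr-uuid=<sr-uuid> name-label="extra disk" type=user virtual-size=100GiB
# xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RW type=Disk
# xe vbd-plug uuid=<vbd-uuid>
```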

Inside the virtual machine, the new disk is accessible as /dev/xvdb. Partition it with fdisk /dev/xvdb, create a filesystem with mke2fs -j /dev/xvdb1, and the new storage is ready for use.

# fdisk /dev/xvdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc6318a63.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.


The number of cylinders for this disk is set to 181975.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/xvdb: 1496.7 GB, 1496796102656 bytes
255 heads, 63 sectors/track, 181975 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc6318a63

    Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-181975, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-181975, default 181975):
Using default value 181975

Command (m for help): p

Disk /dev/xvdb: 1496.7 GB, 1496796102656 bytes
255 heads, 63 sectors/track, 181975 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc6318a63

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1               1      181975  1461714156   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# mke2fs -j /dev/xvdb1

mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
91357184 inodes, 365428539 blocks
18271426 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
11152 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

#
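
To actually use the new filesystem, mount it, and add it to /etc/fstab if it should come back after a reboot. The mount point /data is just an example:

```
# mkdir /data
# mount /dev/xvdb1 /data
# echo "/dev/xvdb1  /data  ext3  defaults  0 2" >> /etc/fstab
```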

See also

When adding extra disk drives, consider also installing the smartmontools: every disk drive fails eventually, and SMART monitoring can warn you before it happens.
Install smartmontools on Citrix XenServer

Links to the official Citrix documentation:
http://docs.vmd.citrix.com/XenServer/5.6.0/1.0/en_gb/reference.html#storage_configuration_examples
http://docs.vmd.citrix.com/XenServer/5.6.0/1.0/en_gb/reference.html#cli-xe-commands_sr

  1. Anonymous

    Thank you very much for this Tutorial :)

    I've been looking for this for.... hours now, and this is the first one which is NO crap :)

    Thanks ;)

  2. Anonymous

    [root ~]# xe sr-create host-uuid=e9a1161e-96e4-4905-898d-04df636b489f shared=false type=lvm content-type=user device-config:device=/dev/sda2 name-label="Datastore1"

    Error code: SR_BACKEND_FAILURE_105
    Error parameters: , Root system device, cannot be used for VM storage [opterr=Device /dev/sda2 contains core system files, please use another device],

    [root ~]#

    1. You need to use a different disk (/dev/sdb) to add new storage to the system. The first disk (/dev/sda) is in use by the XenServer system.

  3. Anonymous

    Thanks for this post.  I'm just getting started with XenServer, and so far, am very happy with it.  Your posting here helped bridge the gap between my existing CentOS knowledge and using XenServer.

    My XenServer has two 1TB disks on it.  When booting the CD, I went into "shell" mode so I could use dd to clear out the beginning and end of each disk (to wipe away GUID partitions).  When I continued setup, I told it not to create any storage repositories.  Then, after it was booted, I logged into the box with SSH, changed the LVM partition to a "RAID autodetect", then used dd to copy all the disk blocks up to and including the end of the second partition to the second disk. By running fdisk on the destination disk, I could use the 'write' command to reload the copied MBR.  Next, I used mdadm to create a RAID1 drive across the two RAID partitions.  Finally, I used pvcreate to make the RAID volume into an LVM PV and then used vgcreate to make a usable RAID-backed LVM out of it.

    At this point, I was able to use your command with the newly-mirrored LVM to create robust storage.

    Because I copied the disk blocks between the disks starting at block 0, I should be able to boot the second disk if the first fails (since the boot blocks would have also been copied).  I don't know what's going to happen to configuration changes though... it would be great if I could put the config on the LVM.  

  4. Anonymous

    I'm with a machine testing XenServer, and I already installed a VM with all my production software. 

    I've already exported this VM and imported in another XenServer machine for tests, while I don't change the OS for my production server.

    I was thinking if it's possible to use the actual disk and install it to a recent installed XenServer and see the VMs I have in this disk ?

    With fdisk -l I can see the disk (obviously), but pvs doesn't show the disk; how can I take all the VMs from this disk?

    Did I make myself clear ? 

    1. Anonymous

      Ok, did a vgscan, and it found the LVM from the other machine.

    2. Anonymous

      Solved it with a "xe sr-introduce" and "xe pbd-create".

  5. Anonymous

    Great Tutorial! Thank you very much (smile)

  6. Anonymous

    How does option #2 affect snapshots?

    I have been using direct access (http://wiki.xen.org/wiki/XCP_DirectDiskAccess) instead of adding an LVM volume to my drives... but using direct disk access removes my ability to take snapshots.


    1. I am using both type=lvm and type=ext volumes and can create snapshots just fine. Not sure what the advantage of the method described in above link is though.

  7. Anonymous

    Arne,

    Thanks for your response.  The direct disk access (from the link above) has a couple of benefits:

    • it's simpler - LVM isn't that hard, but it's not as simple as accessing a drive straight up
    • I can easily pull the drive and read it from any other computer/VM
    • it's faster - I did a simple dd benchmark on an older spare drive ... ~50 MB/s write for LVM, ~75 MB/s write for direct disk access... this is writing to the drive from within the same VM.
    • XenCenter interface clutter - If I add 6 separate drives to a VM with your above method, I have 6 extra drives showing up in my main XenCenter "tree" in the interface.  With direct disk access, they show up as USB drives and are only in the "storage" tab.  This is minor, but still.