=========================================
IBM Storwize family and SVC volume driver
=========================================

The volume management driver for Storwize family and SAN Volume
Controller (SVC) provides OpenStack Compute instances with access to IBM
Storwize family or SVC storage systems.

Supported operations
~~~~~~~~~~~~~~~~~~~~

The Storwize/SVC driver supports the following Block Storage service
volume operations:

-  Create, list, delete, attach (map), and detach (unmap) volumes.
-  Create, list, and delete volume snapshots.
-  Copy an image to a volume.
-  Copy a volume to an image.
-  Clone a volume.
-  Extend a volume.
-  Retype a volume.
-  Create a volume from a snapshot.
-  Create, list, and delete consistency groups.
-  Create, list, and delete consistency group snapshots.
-  Modify a consistency group (add or remove volumes).
-  Create a consistency group from a source (either a CG or a CG snapshot).
-  Manage an existing volume.
-  Fail over a host for replicated back ends.
-  Fail back a host for replicated back ends.
-  Create, list, and delete replication groups.
-  Enable or disable a replication group.
-  Fail over or fail back a replication group.

Configure the Storwize family and SVC system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Network configuration
---------------------

The Storwize family or SVC system must be configured for iSCSI, Fibre
Channel, or both.

If using iSCSI, each Storwize family or SVC node should have at least
one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP
address associated with the volume's preferred node (if available) to
attach the volume to the instance; otherwise, it uses the first available
iSCSI IP address of the system. The driver obtains the iSCSI IP
addresses directly from the storage system. You do not need to provide
these iSCSI IP addresses directly to the driver.

.. note::

   If using iSCSI, ensure that the compute nodes have iSCSI network
   access to the Storwize family or SVC system.

If using Fibre Channel (FC), each Storwize family or SVC node should
have at least one WWPN port configured. The driver uses all available
WWPNs to attach the volume to the instance. The driver obtains the
WWPNs directly from the storage system. You do not need to provide
these WWPNs directly to the driver.

.. note::

   If using FC, ensure that the compute nodes have FC connectivity to
   the Storwize family or SVC system.

iSCSI CHAP authentication
-------------------------

If using iSCSI for data access and the
``storwize_svc_iscsi_chap_enabled`` option is set to ``True``, the driver
associates randomly generated CHAP secrets with all hosts on the Storwize
family system. The compute nodes use these secrets when creating
iSCSI connections.

.. warning::

   CHAP secrets are added to existing hosts as well as newly created
   ones. If the CHAP option is enabled, hosts will not be able to
   access the storage without the generated secrets.

.. note::

   Not all OpenStack Compute drivers support CHAP authentication.
   Please check compatibility before using.

.. note::

   CHAP secrets are passed from OpenStack Block Storage to Compute in
   clear text. This communication should be secured to ensure that CHAP
   secrets are not discovered.

Configure storage pools
-----------------------

The IBM Storwize/SVC driver can allocate volumes in multiple pools.
The pools should be created in advance and be provided to the driver
using the ``storwize_svc_volpool_name`` configuration flag in the form
of a comma-separated list.
For the complete list of configuration flags, see :ref:`config_flags`.
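
For example, to let the driver allocate volumes from two pre-created
pools (the pool names below are illustrative):

.. code-block:: ini

   [svc1234]
   storwize_svc_volpool_name = cinder_pool1,cinder_pool2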

Configure user authentication for the driver
--------------------------------------------

The driver requires access to the Storwize family or SVC system
management interface, with which it communicates over SSH. Provide the
driver with the Storwize family or SVC management IP using the
``san_ip`` flag, and the management port using the ``san_ssh_port``
flag. By default, the port value is configured to be port 22 (SSH).
You can also set a secondary management IP using the
``storwize_san_secondary_ip`` flag.
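
A minimal management-access configuration might look as follows (the IP
addresses are illustrative):

.. code-block:: ini

   [svc1234]
   san_ip = 1.2.3.4
   san_ssh_port = 22
   storwize_san_secondary_ip = 1.2.3.5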

.. note::

   Make sure the compute node running the cinder-volume management
   driver has SSH network access to the storage system.

To allow the driver to communicate with the Storwize family or SVC
system, you must provide the driver with a user on the storage system.
The driver supports two authentication methods: password-based
authentication and SSH key pair authentication. The user should have an
Administrator role. It is suggested to create a new user for the
management driver. Please consult with your storage and security
administrator regarding the preferred authentication method and how
passwords or SSH keys should be stored in a secure manner.

.. note::

   When creating a new user on the Storwize or SVC system, make sure
   the user belongs to the Administrator group or to another group that
   has an Administrator role.

If using password authentication, assign a password to the user on the
Storwize or SVC system. The driver configuration flags for the user and
password are ``san_login`` and ``san_password``, respectively.

If you are using SSH key pair authentication, create SSH private and
public keys using the instructions below or by any other method.
Associate the public key with the user by uploading the public key:
select the :guilabel:`choose file` option in the Storwize family or SVC
management GUI under :guilabel:`SSH public key`. Alternatively, you may
associate the SSH public key using the command-line interface; details can
be found in the Storwize and SVC documentation. The private key should be
provided to the driver using the ``san_private_key`` configuration flag.
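
With key pair authentication, the back-end configuration might look as
follows (the key path is illustrative):

.. code-block:: ini

   [svc1234]
   san_ip = 1.2.3.4
   san_login = superuser
   san_private_key = /etc/cinder/storwize_rsa_key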

Create an SSH key pair with OpenSSH
-----------------------------------

You can create an SSH key pair using OpenSSH by running:

.. code-block:: console

   $ ssh-keygen -t rsa

The command prompts for a file to save the key pair. For example, if you
select ``key`` as the filename, two files are created: ``key`` and
``key.pub``. The ``key`` file holds the private SSH key and ``key.pub``
holds the public SSH key.

The command also prompts for a pass phrase, which should be empty.
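
The prompts can also be answered on the command line, generating the
pair non-interactively with an empty passphrase (the filename ``key`` is
illustrative):

.. code-block:: console

   $ ssh-keygen -t rsa -f key -N ""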

The private key file should be provided to the driver using the
``san_private_key`` configuration flag. The public key should be
uploaded to the Storwize family or SVC system using the storage
management GUI or command-line interface.

.. note::

   Ensure that Cinder has read permissions on the private key file.

Configure the Storwize family and SVC driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable the Storwize family and SVC driver
-----------------------------------------

Set the volume driver to the Storwize family and SVC driver by setting
the ``volume_driver`` option in the ``cinder.conf`` file as follows:

iSCSI:

.. code-block:: ini

   [svc1234]
   volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
   san_ip = 1.2.3.4
   san_login = superuser
   san_password = passw0rd
   storwize_svc_volpool_name = cinder_pool1
   volume_backend_name = svc1234

FC:

.. code-block:: ini

   [svc1234]
   volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
   san_ip = 1.2.3.4
   san_login = superuser
   san_password = passw0rd
   storwize_svc_volpool_name = cinder_pool1
   volume_backend_name = svc1234

Replication configuration
-------------------------

Add the following to the back-end specification to specify another storage
system to replicate to:

.. code-block:: ini

   replication_device = backend_id:rep_svc,
                        san_ip:1.2.3.5,
                        san_login:superuser,
                        san_password:passw0rd,
                        pool_name:cinder_pool1

The ``backend_id`` is a unique name for the remote storage system; the
``san_ip``, ``san_login``, and ``san_password`` are its authentication
information. The ``pool_name`` is the pool name for the replication
target volume.

.. note::

   Only one ``replication_device`` can be configured per back end,
   since only one replication target is currently supported.

.. _config_flags:

Storwize family and SVC driver options in cinder.conf
-----------------------------------------------------

The following options specify default values for all volumes. Some can
be overridden using volume types, which are described below.

.. include:: ../../tables/cinder-storwize.inc

Note the following:

* The authentication requires either a password (``san_password``) or
  an SSH private key (``san_private_key``). One must be specified. If
  both are specified, the driver uses only the SSH private key.

* The driver creates thin-provisioned volumes by default. The
  ``storwize_svc_vol_rsize`` flag defines the initial physical
  allocation percentage for thin-provisioned volumes; if set to
  ``-1``, the driver creates fully allocated volumes. More details about
  the available options are available in the Storwize family and SVC
  documentation.
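
For example, to create thin-provisioned volumes with a 2% initial
physical allocation (the value is illustrative):

.. code-block:: ini

   [svc1234]
   storwize_svc_vol_rsize = 2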


Placement with volume types
---------------------------

The IBM Storwize/SVC driver exposes capabilities that can be added to
the ``extra specs`` of volume types and used by the filter
scheduler to determine the placement of new volumes. Make sure to prefix
these keys with ``capabilities:`` to indicate that the scheduler should
use them. The following ``extra specs`` are supported:

-  ``capabilities:volume_backend_name`` - Specify a specific back end
   where the volume should be created. The back-end name is a
   concatenation of the name of the IBM Storwize/SVC storage system as
   shown in ``lssystem``, an underscore, and the name of the pool (mdisk
   group). For example:

   .. code-block:: ini

      capabilities:volume_backend_name=myV7000_openstackpool

-  ``capabilities:compression_support`` - Specify a back end according to
   compression support. A value of ``True`` requests a
   back end that supports compression, and a value of ``False``
   requests a back end that does not support compression. If you do not
   have constraints on compression support, do not set this key. Note
   that specifying ``True`` does not enable compression; it only
   requests that the volume be placed on a back end that supports
   compression. Example syntax:

   .. code-block:: ini

      capabilities:compression_support='<is> True'

-  ``capabilities:easytier_support`` - Similar semantics as the
   ``compression_support`` key, but for specifying according to support
   of the Easy Tier feature. Example syntax:

   .. code-block:: ini

      capabilities:easytier_support='<is> True'

-  ``capabilities:pool_name`` - Specify the pool in which to create the
   volume when multiple pools are configured. The ``pool_name`` must be
   one of the values configured in the ``storwize_svc_volpool_name``
   flag. Example syntax:

   .. code-block:: ini

      capabilities:pool_name=cinder_pool2

Configure per-volume creation options
-------------------------------------

Volume types can also be used to pass options to the IBM Storwize/SVC
driver, which override the default values set in the configuration
file. Contrary to the previous examples, where the ``capabilities`` scope
was used to pass parameters to the Cinder scheduler, options can be
passed to the IBM Storwize/SVC driver with the ``drivers`` scope.

The following ``extra specs`` keys are supported by the IBM Storwize/SVC
driver:

- rsize
- warning
- autoexpand
- grainsize
- compression
- easytier
- multipath
- iogrp
- mirror_pool
- volume_topology
- peer_pool
- host_site

These keys have the same semantics as their counterparts in the
configuration file. They are set similarly; for example, ``rsize=2`` or
``compression=False``.
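
For example, a volume type that overrides the thin-provisioning defaults
might be created as follows (the type name is illustrative):

.. code-block:: console

   $ openstack volume type create thin_type
   $ openstack volume type set --property drivers:rsize=2 --property drivers:autoexpand=True thin_type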

Example: Volume types
---------------------

In the following example, we create a volume type to specify a
controller that supports compression, and enable compression:

.. code-block:: console

   $ openstack volume type create compressed
   $ openstack volume type set --property capabilities:compression_support='<is> True' --property drivers:compression=True compressed

We can then create a 50 GB volume using this type:

.. code-block:: console

   $ openstack volume create "compressed volume" --type compressed --size 50

In the following example, we create a volume type that enables
synchronous replication (metro mirror):

.. code-block:: console

   $ openstack volume type create ReplicationType
   $ openstack volume type set --property replication_type="<in> metro" \
     --property replication_enabled='<is> True' --property volume_backend_name=svc234 ReplicationType

In the following example, we create a volume type to support a stretched
cluster volume or mirror volume:

.. code-block:: console

   $ openstack volume type create mirror_vol_type
   $ openstack volume type set --property volume_backend_name=svc1 \
     --property drivers:mirror_pool=pool2 mirror_vol_type

Volume types can be used, for example, to provide users with different

-  performance levels (such as allocating entirely on an HDD tier,
   using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD
   tier)

-  resiliency levels (such as allocating volumes in pools with
   different RAID levels)

-  features (such as enabling/disabling Real-time Compression,
   replication volume creation)

QOS
---

The Storwize driver provides QOS support for storage volumes by
controlling the I/O amount. QOS is enabled by editing the
``/etc/cinder/cinder.conf`` file and setting the
``storwize_svc_allow_tenant_qos`` option to ``True``.

There are three ways to set the Storwize ``IOThrottling`` parameter for
storage volumes:

-  Add the ``qos:IOThrottling`` key into a QOS specification and
   associate it with a volume type.

-  Add the ``qos:IOThrottling`` key into an extra specification with a
   volume type.

-  Add the ``qos:IOThrottling`` key to the storage volume metadata.
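
For example, the first approach can be sketched as follows (the QOS
specification name, throttling value, and volume type name are
illustrative):

.. code-block:: console

   $ openstack volume qos create --property qos:IOThrottling=500 limit_iops
   $ openstack volume qos associate limit_iops compressed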

.. note::

   If you are changing a volume type with QOS to a new volume type
   without QOS, the QOS configuration settings will be removed.

Operational notes for the Storwize family and SVC driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Migrate volumes
---------------

In the context of OpenStack Block Storage's volume migration feature,
the IBM Storwize/SVC driver uses the storage system's virtualization
technology. When migrating a volume from one pool to another, the volume
appears in the destination pool almost immediately, while the
storage moves the data in the background.

.. note::

   To enable this feature, both pools involved in a given volume
   migration must have the same values for ``extent_size``. If the
   pools have different values for ``extent_size``, the data will still
   be moved directly between the pools (not a host-side copy), but the
   operation will be synchronous.
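
A migration between pools can be requested with the standard Block
Storage command; the destination host string (``backend#pool``) and
volume name below are illustrative:

.. code-block:: console

   $ openstack volume migrate --host cinder@svc1234#cinder_pool2 test_volume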

Extend volumes
--------------

The IBM Storwize/SVC driver allows for extending a volume's size, but
only for volumes without snapshots.
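
For example, a volume can be extended to 100 GB as follows (the volume
name is illustrative):

.. code-block:: console

   $ openstack volume set --size 100 test_volume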

Snapshots and clones
--------------------

Snapshots are implemented using FlashCopy with no background copy
(space-efficient). Volume clones (volumes created from existing volumes)
are implemented with FlashCopy, but with background copy enabled. This
means that volume clones are independent, full copies. While this
background copy is taking place, attempting to delete or extend the
source volume will result in that operation waiting for the copy to
complete.

Volume retype
-------------

The IBM Storwize/SVC driver enables you to modify volume types. When you
modify volume types, you can also change these extra specs properties:

-  rsize

-  warning

-  autoexpand

-  grainsize

-  compression

-  easytier

-  iogrp

-  nofmtdisk

-  mirror_pool

-  volume_topology

-  peer_pool

-  host_site

.. note::

   When you change the ``rsize``, ``grainsize`` or ``compression``
   properties, volume copies are asynchronously synchronized on the
   array.

.. note::

   To change the ``iogrp`` property, IBM Storwize/SVC firmware version
   6.4.0 or later is required.

Replication operation
---------------------

Configure replication in volume type
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

A volume is replicated only if it is created with a volume type
that has the extra spec ``replication_enabled`` set to ``<is> True``.
Three types of replication are currently supported: global mirror
(asynchronous), global mirror with change volumes (asynchronous), and
metro mirror (synchronous). The type is selected with a volume type
whose extra spec ``replication_type`` is set to ``<in> global``,
``<in> gmcv``, or ``<in> metro``. If no ``replication_type`` is
specified, a global mirror relationship is created.

If ``replication_type`` is set to ``<in> gmcv``, ``cycle_period_seconds``
can be set as the cycling time of the global mirror relationship with
multi-cycling mode. The default value is 300 seconds. Example syntax:

.. code-block:: console

   $ cinder type-create gmcv_type
   $ cinder type-key gmcv_type set replication_enabled='<is> True' \
     replication_type="<in> gmcv" drivers:cycle_period_seconds=500

.. note::

   It is better to establish the partnership between
   the replication source storage and the replication target
   storage manually on the storage back end before creating
   replication volumes.

Failover host
<<<<<<<<<<<<<

The ``failover-host`` command is designed for the case where the primary
storage is down.

.. code-block:: console

   $ cinder failover-host cinder@svciscsi --backend_id target_svc_id

If a failover command has been executed and the primary storage has
been restored, it is possible to fail back by simply specifying
``default`` as the ``backend_id``:

.. code-block:: console

   $ cinder failover-host cinder@svciscsi --backend_id default

.. note::

   Before you perform a failback operation, manually synchronize the
   data from the replication target volume to the primary one on the
   storage back end, and do the failback only after the synchronization
   is done, since the synchronization may take a long time. If the
   synchronization is not done manually, the Storwize Block Storage
   service driver will perform the synchronization and do the failback
   after the synchronization is finished.

Replication group
<<<<<<<<<<<<<<<<<

Before creating a replication group, a group spec with the key
``consistent_group_replication_enabled`` set to ``<is> True`` should be
set in the group type. The volume type used to create the group must be
replication enabled, and its ``replication_type`` should be set to either
``<in> global`` or ``<in> metro``. The ``failover_group`` API allows a
group to be failed over and back without failing over the entire host.
Example syntax:

- Create a replication group

.. code-block:: console

   $ cinder group-type-create rep-group-type-example
   $ cinder group-type-key rep-group-type-example set consistent_group_replication_enabled='<is> True'
   $ cinder type-create type-global
   $ cinder type-key type-global set replication_enabled='<is> True' replication_type='<in> global'
   $ cinder group-create rep-group-type-example type-global --name global-group

- Fail over a replication group

.. code-block:: console

   $ cinder group-failover-replication --secondary-backend-id target_svc_id group_id

- Fail back a replication group

.. code-block:: console

   $ cinder group-failover-replication --secondary-backend-id default group_id

.. note::

   The ``allow-attached-volume`` option can be used to fail over an
   in-use volume, but failing over or back an in-use volume is not
   recommended. If you fail over an in-use volume, the volume status
   remains ``in-use`` after the failover, but the replicated volume
   becomes read-only, since the primary volume is changed to the
   auxiliary side while the instance is still attached to the master
   volume. To reuse the in-use replicated volume as read-write, detach
   the volume and attach it again.

Hyperswap Volumes
-----------------

A hyperswap volume is created with a volume type that has the extra spec
``drivers:volume_topology`` set to ``hyperswap``.
To support hyperswap volumes, IBM Storwize/SVC firmware version 7.6.0 or
later is required.

Add the following to the back-end configuration to specify the preferred
host site for hyperswap volumes.

FC:

.. code-block:: ini

   storwize_preferred_host_site = site1:20000090fa17311e&ff00000000000001,
                                  site2:20000089762sedce&ff00000000000000

iSCSI:

.. code-block:: ini

   storwize_preferred_host_site = site1:iqn.1993-08.org.debian:01:eac5ccc1aaa&iqn.1993-08.org.debian:01:be53b7e236be,
                                  site2:iqn.1993-08.org.debian:01:eac5ccc1bbb&iqn.1993-08.org.debian:01:abcdefg9876w

``site1`` and ``site2`` are the names of the two host sites used in the
Storwize storage system. The WWPNs and IQNs are the connectors used for
host mapping in Storwize.

To create a hyperswap volume type:

.. code-block:: console

   $ cinder type-create hyper_type
   $ cinder type-key hyper_type set drivers:volume_topology=hyperswap \
     drivers:peer_pool=Pool_site2

.. note::

   For hyperswap volumes, the ``rsize`` property is treated as
   ``buffersize``, and the ``iogrp`` property is selected by the
   storage system.

A group is created as a hyperswap group with a group type that has the
group spec ``hyperswap_group_enabled`` set to ``<is> True``.
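
Such a hyperswap group type can be sketched as follows (the type name is
illustrative):

.. code-block:: console

   $ cinder group-type-create hyper_group_type
   $ cinder group-type-key hyper_group_type set hyperswap_group_enabled='<is> True'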