===================
Dell EMC VNX driver
===================

The EMC VNX driver interacts with the configured VNX array and supports
both the iSCSI and FC protocols.

The VNX cinder driver performs the volume operations by executing
Navisphere CLI (NaviSecCLI), a command-line interface used for
management, diagnostics, and reporting functions for VNX.


System requirements
~~~~~~~~~~~~~~~~~~~

- VNX Operational Environment for Block version 5.32 or higher.
- VNX Snapshot and Thin Provisioning licenses should be activated for VNX.
- Python library ``storops`` version 0.5.7 or higher to interact with VNX.
- Navisphere CLI v7.32 or higher is installed along with the driver.

Supported operations
~~~~~~~~~~~~~~~~~~~~

- Create, delete, attach, and detach volumes.
- Create, list, and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Clone a volume.
- Extend a volume.
- Migrate a volume.
- Retype a volume.
- Get volume statistics.
- Create and delete consistency groups.
- Create, list, and delete consistency group snapshots.
- Modify consistency groups.
- Efficient non-disruptive volume backup.
- Create a cloned consistency group.
- Create a consistency group from consistency group snapshots.
- Replication v2.1 support.
- Generic Group support.

Preparation
~~~~~~~~~~~

This section contains instructions to prepare the Block Storage nodes to
use the EMC VNX driver. You should install the Navisphere CLI and ensure
the zoning configuration is correct.

Install Navisphere CLI
----------------------

Navisphere CLI needs to be installed on all Block Storage nodes within
an OpenStack deployment. You need to download different versions for
different platforms:

-  For Ubuntu x64, DEB is available at `EMC OpenStack
   Github <https://github.com/emc-openstack/naviseccli>`_.

-  For all other variants of Linux, Navisphere CLI is available at
   `Downloads for VNX2
   Series <https://support.emc.com/downloads/36656_VNX2-Series>`_ or
   `Downloads for VNX1
   Series <https://support.emc.com/downloads/12781_VNX1-Series>`_.

Install Python library storops
------------------------------

``storops`` is a Python library that interacts with the VNX array through
Navisphere CLI.
Use the following command to install the ``storops`` library:

.. code-block:: console

   $ pip install storops


Check array software
--------------------

Make sure you have the following software installed for certain features:

+--------------------------------------------+---------------------+
| Feature                                    | Software Required   |
+============================================+=====================+
| All                                        | ThinProvisioning    |
+--------------------------------------------+---------------------+
| All                                        | VNXSnapshots        |
+--------------------------------------------+---------------------+
| FAST cache support                         | FASTCache           |
+--------------------------------------------+---------------------+
| Create volume with type ``compressed``     | Compression         |
+--------------------------------------------+---------------------+
| Create volume with type ``deduplicated``   | Deduplication       |
+--------------------------------------------+---------------------+

**Required software**

You can check the status of your array software in the :guilabel:`Software`
page of :guilabel:`Storage System Properties`. Here is what it looks like:

.. figure:: ../../figures/emc-enabler.png

Network configuration
---------------------

For the FC driver, make sure that FC zoning is properly configured between
the hosts and the VNX. Check :ref:`register-fc-port-with-vnx` for reference.

For the iSCSI driver, make sure your VNX iSCSI ports are accessible by
your hosts. Check :ref:`register-iscsi-port-with-vnx` for reference.

You can set ``initiator_auto_registration = True`` to avoid registering the
ports manually. Check the details of this option in :ref:`emc-vnx-conf`.

If you want to set up multipath, refer to :ref:`multipath-setup`.


.. _emc-vnx-conf:

Back-end configuration
~~~~~~~~~~~~~~~~~~~~~~

Make the following changes in the ``/etc/cinder/cinder.conf`` file.

Minimum configuration
---------------------

Here is a sample of a minimal back-end configuration. See the following
sections for details of each option.
Set ``storage_protocol = iscsi`` if the iSCSI protocol is used.

.. code-block:: ini

   [DEFAULT]
   enabled_backends = vnx_array1

   [vnx_array1]
   san_ip = 10.10.72.41
   san_login = sysadmin
   san_password = sysadmin
   naviseccli_path = /opt/Navisphere/bin/naviseccli
   volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
   initiator_auto_registration = True
   storage_protocol = fc

Multiple back-end configuration
-------------------------------

Here is a sample of a multiple back-end configuration. See the following
sections for details of each option.
Set ``storage_protocol = iscsi`` if the iSCSI protocol is used.

.. code-block:: ini

   [DEFAULT]
   enabled_backends = backendA, backendB

   [backendA]
   storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
   san_ip = 10.10.72.41
   storage_vnx_security_file_dir = /etc/secfile/array1
   naviseccli_path = /opt/Navisphere/bin/naviseccli
   volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
   initiator_auto_registration = True
   storage_protocol = fc

   [backendB]
   storage_vnx_pool_names = Pool_02_SAS
   san_ip = 10.10.26.101
   san_login = username
   san_password = password
   naviseccli_path = /opt/Navisphere/bin/naviseccli
   volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
   initiator_auto_registration = True
   storage_protocol = fc

The value of the option ``storage_protocol`` can be either ``fc`` or ``iscsi``,
and is case insensitive.

For more details on multiple back ends, see `Configure multiple-storage
back ends <https://docs.openstack.org/cinder/latest/admin/blockstorage-multi-backend.html>`_.

Required configurations
-----------------------

**IP of the VNX Storage Processors**

Specify the SP A or SP B IP address to connect to:

.. code-block:: ini

   san_ip = <IP of VNX Storage Processor>

**VNX login credentials**

There are two ways to specify the credentials.

-  Use plain text username and password.

   Supply the plain-text username and password:

   .. code-block:: ini

      san_login = <VNX account with administrator role>
      san_password = <password for VNX account>
      storage_vnx_authentication_type = global

   Valid values for ``storage_vnx_authentication_type`` are: ``global``
   (default), ``local``, and ``ldap``.

-  Use a security file.

   This approach avoids the plain text password in your cinder
   configuration file. Supply a security file as below:

   .. code-block:: ini

      storage_vnx_security_file_dir = <path to security file>

Check the Unisphere CLI user guide or :ref:`authenticate-by-security-file`
for how to create a security file.

**Path to your Unisphere CLI**

Specify the absolute path to your naviseccli:

.. code-block:: ini

   naviseccli_path = /opt/Navisphere/bin/naviseccli

**Driver's storage protocol**

-  For the FC driver, add the following option:

   .. code-block:: ini

      volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
      storage_protocol = fc

-  For the iSCSI driver, add the following option:

   .. code-block:: ini

      volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
      storage_protocol = iscsi

Optional configurations
~~~~~~~~~~~~~~~~~~~~~~~

VNX pool names
--------------

Specify the list of pools to be managed, separated by commas. They should
already exist in VNX.

.. code-block:: ini

   storage_vnx_pool_names = pool 1, pool 2

If this value is not specified, all pools of the array will be used.

Initiator auto registration
---------------------------

When ``initiator_auto_registration`` is set to ``True``, the driver will
automatically register initiators to all working target ports of the VNX array
during volume attaching (the driver will skip initiators that have
already been registered) if the option ``io_port_list`` is not specified in
the ``cinder.conf`` file.

If the user wants to register the initiators with some specific ports but not
register with the other ports, this functionality should be disabled.

When a comma-separated list is given to ``io_port_list``, the driver will only
register the initiators to the ports specified in the list and only return
target ports that are in ``io_port_list`` instead of all target ports.

-  Example for FC ports:

   .. code-block:: ini

      io_port_list = a-1,B-3

   ``a`` or ``B`` is the *Storage Processor*, and the numbers ``1`` and ``3``
   are the *Port ID*.

-  Example for iSCSI ports:

   .. code-block:: ini

      io_port_list = a-1-0,B-3-0

   ``a`` or ``B`` is the *Storage Processor*, the first numbers ``1`` and ``3``
   are the *Port ID*, and the second number ``0`` is the *Virtual Port ID*.

.. note::

   -  Ports that are already registered are simply bypassed rather than
      deregistered, whether or not they appear in ``io_port_list``.

   -  The driver will raise an exception if ports in ``io_port_list``
      do not exist in VNX during startup.

Force delete volumes in storage group
-------------------------------------

Some ``available`` volumes may remain in a storage group on the VNX array due
to OpenStack timeout issues, but the VNX array does not allow the user to
delete volumes which are in a storage group. The option
``force_delete_lun_in_storagegroup`` allows the user to delete the
``available`` volumes in this situation.

When ``force_delete_lun_in_storagegroup`` is set to ``True`` in the back-end
section, the driver will move the volumes out of the storage groups and then
delete them if the user tries to delete volumes that remain in a storage
group on the VNX array.

The default value of ``force_delete_lun_in_storagegroup`` is ``False``.
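
For example, to enable this behavior, add the option to the back-end section
(``vnx_array1`` in the earlier sample):

.. code-block:: ini

   [vnx_array1]
   force_delete_lun_in_storagegroup = True
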

Over subscription in thin provisioning
--------------------------------------

Over subscription allows the sum of all volumes' capacity (provisioned
capacity) to be larger than the pool's total capacity.

``max_over_subscription_ratio`` in the back-end section is the ratio of
provisioned capacity over total capacity.

The default value of ``max_over_subscription_ratio`` is 20.0, which means
the provisioned capacity can be 20 times the total capacity.
Any value larger than 1.0 allows the provisioned capacity to exceed the
total capacity.
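
For example, the following setting (10.0 is an illustrative value) caps the
provisioned capacity at 10 times the pool's total capacity:

.. code-block:: ini

   max_over_subscription_ratio = 10.0
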

Storage group automatic deletion
--------------------------------

For volume attaching, the driver creates a storage group on the VNX for each
compute node hosting the VM instances that consume VNX Block Storage
(using the compute node's host name as the storage group's name). All the
volumes attached to the VM instances in a compute node will be put into the
storage group. If ``destroy_empty_storage_group`` is set to ``True``, the
driver will remove the empty storage group after its last volume is detached.
For data safety, it is not suggested to set
``destroy_empty_storage_group=True`` unless the VNX is exclusively managed by
one Block Storage node, because a consistent ``lock_path`` is required for
operation synchronization for this behavior.
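
A sketch of enabling this behavior in the back-end section:

.. code-block:: ini

   destroy_empty_storage_group = True
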

Initiator auto deregistration
-----------------------------

Enabling storage group automatic deletion is the precondition of this
function. If ``initiator_auto_deregistration`` is set to ``True``, the driver
will deregister all FC and iSCSI initiators of the host after its storage
group is deleted.
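
Since storage group automatic deletion is the precondition, both options are
set together, for example:

.. code-block:: ini

   destroy_empty_storage_group = True
   initiator_auto_deregistration = True
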

FC SAN auto zoning
------------------

The EMC VNX driver supports FC SAN auto zoning when ``ZoneManager`` is
configured and ``zoning_mode`` is set to ``fabric`` in ``cinder.conf``.
For ZoneManager configuration, refer to :doc:`../fc-zoning`.
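
A minimal sketch of the ``cinder.conf`` change, assuming the ZoneManager
itself (zone driver, fabric credentials, and so on) is configured separately
as described in :doc:`../fc-zoning`:

.. code-block:: ini

   [DEFAULT]
   zoning_mode = fabric
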

Volume number threshold
-----------------------

In VNX, there is a limitation on the number of pool volumes that can be created
in the system. When the limitation is reached, no more pool volumes can be
created even if there is remaining capacity in the storage pool. In other
words, if the scheduler dispatches a volume creation request to a back end that
has free capacity but has reached the volume limitation, the creation fails.

The default value of ``check_max_pool_luns_threshold`` is ``False``. When
``check_max_pool_luns_threshold=True``, the pool-based back end will check the
limit and will report 0 free capacity to the scheduler if the limit is reached.
The scheduler will then be able to skip a pool-based back end that has reached
its pool volume limit.
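
Enable the check in the back-end section as follows:

.. code-block:: ini

   check_max_pool_luns_threshold = True
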

iSCSI initiators
----------------

``iscsi_initiators`` is a dictionary of IP addresses of the iSCSI
initiator ports on the OpenStack compute and Block Storage nodes which want to
connect to VNX via iSCSI. If this option is configured, the driver will
leverage this information to find an accessible iSCSI target portal for the
initiator when attaching a volume. Otherwise, the iSCSI target portal will be
chosen in a relatively random way.

.. note::

   This option is only valid for the iSCSI driver.

Here is an example: VNX connects ``host1`` with ``10.0.0.1`` and
``10.0.0.2``, and connects ``host2`` with ``10.0.0.3``.

The key name (``host1`` in the example) should be the output of the
:command:`hostname` command.

.. code-block:: ini

   iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}

Default timeout
---------------

Specify the timeout in minutes for operations like LUN migration and LUN
creation. For example, LUN migration is a typical long-running operation,
which depends on the LUN size and the load of the array. An upper bound
suitable for the specific deployment can be set to avoid an unnecessarily
long wait.

The default value for this option is ``infinite``.

.. code-block:: ini

   default_timeout = 60

Max LUNs per storage group
--------------------------

The ``max_luns_per_storage_group`` option specifies the maximum number of LUNs
in a storage group. The default value is 255, which is also the maximum value
supported by VNX.
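
The option is set in the back-end section, for example:

.. code-block:: ini

   max_luns_per_storage_group = 255
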

Ignore pool full threshold
--------------------------

If ``ignore_pool_full_threshold`` is set to ``True``, the driver will force LUN
creation even if the full threshold of the pool is reached. The default value
is ``False``.
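
To force LUN creation regardless of the pool full threshold, set the option in
the back-end section:

.. code-block:: ini

   ignore_pool_full_threshold = True
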

Extra spec options
~~~~~~~~~~~~~~~~~~

Extra specs are used in volume types created in Block Storage as the preferred
property of the volume.

The Block Storage scheduler will use extra specs to find the suitable back end
for the volume, and the Block Storage driver will create the volume based on
the properties specified by the extra spec.

Use the following command to create a volume type:

.. code-block:: console

   $ openstack volume type create demoVolumeType

Use the following command to update the extra spec of a volume type:

.. code-block:: console

   $ openstack volume type set --property provisioning:type=thin --property thick_provisioning_support='<is> True' demoVolumeType

The following sections describe the VNX extra spec keys.

Provisioning type
-----------------

-  Key: ``provisioning:type``

-  Possible Values:

   -  ``thick``

      Volume is fully provisioned.

      Run the following commands to create a ``thick`` volume type:

      .. code-block:: console

         $ openstack volume type create ThickVolumeType
         $ openstack volume type set --property provisioning:type=thick --property thick_provisioning_support='<is> True' ThickVolumeType

   -  ``thin``

      Volume is virtually provisioned.

      Run the following commands to create a ``thin`` volume type:

      .. code-block:: console

         $ openstack volume type create ThinVolumeType
         $ openstack volume type set --property provisioning:type=thin --property thin_provisioning_support='<is> True' ThinVolumeType

   -  ``deduplicated``

      Volume is ``thin`` and deduplication is enabled. The administrator shall
      go to the VNX to configure the system-level deduplication settings. To
      create a deduplicated volume, the VNX Deduplication license must be
      activated on VNX, and ``deduplication_support=True`` must be specified
      to let the Block Storage scheduler find the proper volume back end.

      Run the following commands to create a ``deduplicated`` volume type:

      .. code-block:: console

         $ openstack volume type create DeduplicatedVolumeType
         $ openstack volume type set --property provisioning:type=deduplicated --property deduplicated_support='<is> True' DeduplicatedVolumeType

   -  ``compressed``

      Volume is ``thin`` and compression is enabled. The administrator shall go
      to the VNX to configure the system-level compression settings. To create
      a compressed volume, the VNX Compression license must be activated on
      VNX, and ``compression_support=True`` must be used to let the Block
      Storage scheduler find a volume back end. VNX does not support creating
      snapshots on a compressed volume.

      Run the following commands to create a ``compressed`` volume type:

      .. code-block:: console

         $ openstack volume type create CompressedVolumeType
         $ openstack volume type set --property provisioning:type=compressed --property compression_support='<is> True' CompressedVolumeType

-  Default: ``thick``

.. note::

   ``provisioning:type`` replaces the old spec key
   ``storagetype:provisioning``, which has been obsolete since the *Mitaka*
   release.

Storage tiering support
-----------------------

- Key: ``storagetype:tiering``
- Possible values:

  - ``StartHighThenAuto``
  - ``Auto``
  - ``HighestAvailable``
  - ``LowestAvailable``
  - ``NoMovement``

- Default: ``StartHighThenAuto``

VNX supports fully automated storage tiering, which requires the FAST license
activated on the VNX. The OpenStack administrator can use the extra spec key
``storagetype:tiering`` to set the tiering policy of a volume (the five
supported values are listed above) and use the key
``fast_support='<is> True'`` to let the Block Storage scheduler find a volume
back end which manages a VNX with the FAST license activated.

Run the following commands to create a volume type with a tiering policy:

.. code-block:: console

   $ openstack volume type create ThinVolumeOnAutoTier
   $ openstack volume type set --property provisioning:type=thin --property storagetype:tiering=Auto --property fast_support='<is> True' ThinVolumeOnAutoTier

.. note::

   The tiering policy cannot be applied to a deduplicated volume. The tiering
   policy of a deduplicated LUN aligns with the settings of the pool.

FAST cache support
------------------

-  Key: ``fast_cache_enabled``

-  Possible values:

   -  ``True``

   -  ``False``

-  Default: ``False``

VNX has a FAST Cache feature which requires the FAST Cache license to be
activated on the VNX. The volume will be created on a back end with FAST Cache
enabled when ``<is> True`` is specified.
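
For example, run the following commands to create a volume type that targets
FAST Cache enabled back ends (the volume type name is only an example):

.. code-block:: console

   $ openstack volume type create FASTCacheVolumeType
   $ openstack volume type set --property fast_cache_enabled='<is> True' FASTCacheVolumeType
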

Pool name
---------

-  Key: ``pool_name``

-  Possible values: name of the storage pool managed by cinder

-  Default: None

If the user wants to create a volume on a certain storage pool in a back end
that manages multiple pools, a volume type with an extra spec specifying the
storage pool should be created first; the user can then use this volume type
to create the volume.

Run the following commands to create the volume type:

.. code-block:: console

   $ openstack volume type create HighPerf
   $ openstack volume type set --property pool_name=Pool_02_SASFLASH --property volume_backend_name=vnx_41 HighPerf

Obsolete extra specs
--------------------

.. note::

   *DO NOT* use the following obsolete extra spec keys:

   - ``storagetype:provisioning``
   - ``storagetype:pool``

Force detach
------------

The user can use the ``os-force_detach`` action to detach a volume from all
its attached hosts. For more detail, please refer to
https://developer.openstack.org/api-ref/block-storage/v2/?expanded=force-detach-volume-detail#force-detach-volume


Advanced features
~~~~~~~~~~~~~~~~~

Snap copy
---------

- Metadata Key: ``snapcopy``
- Possible Values:

  - ``True`` or ``true``
  - ``False`` or ``false``

- Default: ``False``

The VNX driver supports snap copy, which accelerates the process of
creating a copied volume.

By default, the driver will use `asynchronous migration support`_, which will
start a VNX migration session. When snap copy is used, the driver creates a
snapshot and mounts it as a volume for the two kinds of operations (cloning a
volume and creating a volume from a snapshot), which will be instant even for
large volumes.

To enable this functionality, append ``--metadata snapcopy=True``
when creating a cloned volume or creating a volume from a snapshot.

.. code-block:: console

   $ cinder create --source-volid <source-volid> --name "cloned_volume" --metadata snapcopy=True

Or

.. code-block:: console

   $ cinder create --snapshot-id <snapshot-id> --name "vol_from_snapshot" --metadata snapcopy=True


The newly created volume is a snap copy instead of
a full copy. If a full copy is needed, retype or migrate can be used
to convert the snap-copy volume to a full-copy volume, which may be
time-consuming.

You can determine whether the volume is a snap-copy volume or not by
showing its metadata. If the ``snapcopy`` in metadata is ``True`` or ``true``,
the volume is a snap-copy volume. Otherwise, it is a full-copy volume.

.. code-block:: console

   $ cinder metadata-show <volume>

**Constraints**

- The number of snap-copy volumes created from a single source volume is
  limited to 255 at one point in time.
- A source volume which has snap-copy volumes cannot be deleted or migrated.
- A snap-copy volume will be changed to a full-copy volume after host-assisted
  or storage-assisted migration.
- A snap-copy volume cannot be added to a consistency group because of a VNX
  limitation.

Efficient non-disruptive volume backup
--------------------------------------

The default implementation in Block Storage for non-disruptive volume backup
is not efficient since a cloned volume will be created during backup.

The approach of efficient backup is to create a snapshot for the volume and
connect this snapshot (a mount point in VNX) to the Block Storage host for
volume backup. This eliminates the migration time involved in a volume clone.
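
With this driver, a non-disruptive backup of an ``in-use`` volume is requested
in the usual way, for example:

.. code-block:: console

   $ cinder backup-create --force --name <backup-name> <volume>
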

**Constraints**

-  Backup creation for a snap-copy volume is not allowed if the volume
   status is ``in-use`` since a snapshot cannot be taken from this volume.

Configurable migration rate
---------------------------

The VNX cinder driver leverages the LUN migration feature of the VNX. LUN
migration is involved in cloning, migrating, retyping, and creating a volume
from a snapshot. When the admin sets ``migrate_rate`` in the volume's
``metadata``, the VNX driver starts the migration with the specified rate. The
available values for ``migrate_rate`` are ``high``, ``asap``, ``low`` and
``medium``.

The following is an example to set ``migrate_rate`` to ``asap``:

.. code-block:: console

   $ cinder metadata <volume-id> set migrate_rate=asap

Once set, any cinder volume operation involving VNX LUN migration will
use this value as the migration rate. To restore the default migration rate,
unset the metadata as follows:

.. code-block:: console

   $ cinder metadata <volume-id> unset migrate_rate

.. note::

   Do not use the ``asap`` migration rate when the system is in production, as
   the normal host I/O may be interrupted. Use ``asap`` only when the system
   is offline (free of any host-level I/O).

Replication v2.1 support
------------------------

Cinder introduced Replication v2.1 support in Mitaka. It supports fail-over
and fail-back replication for a specific back end. In the VNX cinder
driver, **MirrorView** is used to set up replication for the volume.

To enable this feature, you need to set the configuration in ``cinder.conf``
as below:

.. code-block:: ini

   replication_device = backend_id:<secondary VNX serial number>,
                        san_ip:192.168.1.2,
                        san_login:admin,
                        san_password:admin,
                        naviseccli_path:/opt/Navisphere/bin/naviseccli,
                        storage_vnx_authentication_type:global,
                        storage_vnx_security_file_dir:

Currently, only synchronized mode **MirrorView** is supported, and one volume
can only have one secondary storage system. Therefore, you can have only one
``replication_device`` present in the driver configuration section.

To create a replication enabled volume, you need to create a volume type:

.. code-block:: console

   $ openstack volume type create replication-type
   $ openstack volume type set --property replication_enabled="<is> True" replication-type

And then create the volume with the above volume type:

.. code-block:: console

   $ openstack volume create replication-volume --type replication-type --size 1

**Supported operations**

- Create volume
- Create cloned volume
- Create volume from snapshot
- Fail-over volume:

  .. code-block:: console

     $ cinder failover-host --backend_id <secondary VNX serial number> <hostname>

- Fail-back volume:

  .. code-block:: console

     $ cinder failover-host --backend_id default <hostname>

**Requirements**

- The two VNX systems must be in the same domain.
- For iSCSI MirrorView, the user needs to set up the iSCSI connection before
  enabling replication in Cinder.
- For FC MirrorView, the user needs to zone specific FC ports from the two
  VNX systems together.
- The MirrorView Sync enabler (**MirrorView/S**) must be installed on both
  systems.
- The write intent log must be enabled on both VNX systems.

For more information on how to configure, please refer to: `MirrorView-Knowledgebook:-Releases-30-–-33 <https://support.emc.com/docu32906_MirrorView-Knowledgebook:-Releases-30-%E2%80%93-33---A-Detailed-Review.pdf?language=en_US>`_

Asynchronous migration support
------------------------------

The VNX cinder driver supports asynchronous migration during volume cloning.

The driver uses asynchronous migration as the default cloning method when
creating a volume from a source. The driver will return immediately after the
migration session starts on the VNX, which dramatically reduces the time before
a volume is available for use.

To disable this feature, the user can add ``--metadata async_migrate=False``
when creating a new volume from a source.
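
For example, to disable asynchronous migration for a single cloned volume:

.. code-block:: console

   $ cinder create --source-volid <source-volid> --name "cloned_volume" --metadata async_migrate=False
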


Best practice
~~~~~~~~~~~~~

.. _multipath-setup:

Multipath setup
---------------

Enabling multipath volume access is recommended for robust data access.
The major configuration includes:

#. Install ``multipath-tools``, ``sysfsutils`` and ``sg3-utils`` on the
   nodes hosting compute and ``cinder-volume`` services. Check
   the operating system manual for the system distribution for specific
   installation steps. For Red Hat based distributions, they should be
   ``device-mapper-multipath``, ``sysfsutils`` and ``sg3_utils``.

#. Specify ``use_multipath_for_image_xfer=true`` in the ``cinder.conf`` file
   for each FC/iSCSI back end.

#. Specify ``iscsi_use_multipath=True`` in the ``libvirt`` section of the
   ``nova.conf`` file. This option is valid for both the iSCSI and FC drivers.

For multipath-tools, here is an EMC-recommended sample
``/etc/multipath.conf`` file.

``user_friendly_names`` is not specified in the configuration and thus
it will take the default value ``no``. It is not recommended to set it
to ``yes`` because it may cause operations such as VM live migration to fail.

.. code-block:: vim

   blacklist {
       # Skip the files under /dev that are definitely not FC/iSCSI devices
       # Different system may need different customization
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z][0-9]*"
       devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

       # Skip LUNZ device from VNX
       device {
           vendor "DGC"
           product "LUNZ"
           }
   }

   defaults {
       user_friendly_names no
       flush_on_last_del yes
   }

   devices {
       # Device attributed for EMC CLARiiON and VNX series ALUA
       device {
           vendor "DGC"
           product ".*"
           product_blacklist "LUNZ"
           path_grouping_policy group_by_prio
           path_selector "round-robin 0"
           path_checker emc_clariion
           features "1 queue_if_no_path"
           hardware_handler "1 alua"
           prio alua
           failback immediate
       }
   }

.. note::

   When multipath is used in OpenStack, multipath faulty devices may
   appear on Nova-Compute nodes due to different issues (`Bug
   1336683 <https://bugs.launchpad.net/nova/+bug/1336683>`_ is a
   typical example).

A solution to completely avoid faulty devices has not been found yet.
``faulty_device_cleanup.py`` mitigates this issue when VNX iSCSI storage is
used. Cloud administrators can deploy the script on all Nova-Compute nodes and
use a CRON job to run the script on each Nova-Compute node periodically so
that faulty devices will not stay too long. Refer to `VNX faulty device
cleanup <https://github.com/emc-openstack/vnx-faulty-device-cleanup>`_ for
detailed usage and the script.

Restrictions and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

iSCSI port cache
----------------

The EMC VNX iSCSI driver caches the iSCSI port information. After changing the
iSCSI port configuration, the user should restart the ``cinder-volume``
service or wait a few seconds (the interval configured by
``periodic_interval`` in the ``cinder.conf`` file) before any volume
attachment operation. Otherwise the attachment may fail because the old iSCSI
port configuration is used.
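
For example, the refresh interval can be tuned with the ``periodic_interval``
option (value in seconds) in the ``[DEFAULT]`` section of ``cinder.conf``:

.. code-block:: ini

   [DEFAULT]
   periodic_interval = 60
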

No extending for volume with snapshots
--------------------------------------

VNX does not support extending a thick volume which has a snapshot. If the
user tries to extend a volume which has a snapshot, the status of the volume
would change to ``error_extending``.

Limitations for deploying cinder on compute node
-------------------------------------------------

It is not recommended to deploy the driver on a compute node if ``cinder
upload-to-image --force True`` is used against an in-use volume. Otherwise,
``cinder upload-to-image --force True`` will terminate the data access of the
VM instance to the volume.

Storage group with host names in VNX
------------------------------------

When the driver notices that there is no existing storage group that has the
host name as the storage group name, it will create the storage group and also
add the compute node's or Block Storage node's registered initiators into the
storage group.

If the driver notices that the storage group already exists, it will assume
that the registered initiators have also been put into it and skip the
operations above for better performance.

It is recommended that the storage administrator does not create the storage
group manually and instead relies on the driver for the preparation. If the
storage administrator needs to create the storage group manually for some
special requirements, the correct registered initiators should be put into the
storage group as well (otherwise the following volume attaching operations
will fail).

EMC storage-assisted volume migration
-------------------------------------

The EMC VNX driver supports storage-assisted volume migration. When the user
starts migrating with ``cinder migrate --force-host-copy False <volume_id>
<host>`` or ``cinder migrate <volume_id> <host>``, cinder will try to leverage
the VNX's native volume migration functionality.
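
For example, assuming a destination back end and pool named
``node1@vnx_array1#Pool_01_SAS`` (the names here are placeholders), the
migration is started with:

.. code-block:: console

   $ cinder migrate <volume_id> node1@vnx_array1#Pool_01_SAS
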

In the following scenarios, VNX storage-assisted volume migration will not be
triggered:

- ``in-use`` volume migration between back ends with different storage
  protocols, for example, FC and iSCSI.
- Volume is to be migrated across arrays.

Appendix
~~~~~~~~

.. _authenticate-by-security-file:

Authenticate by security file
-----------------------------

VNX credentials are necessary when the driver connects to the VNX system.
Credentials in ``global``, ``local`` and ``ldap`` scopes are supported. There
are two approaches to provide the credentials.

The recommended one is to use a Navisphere CLI security file to provide the
credentials, which avoids placing the plain text credentials in the
configuration file. Following is the instruction on how to do this.

#. Find out the Linux user id of the ``cinder-volume`` processes. Assume the
   ``cinder-volume`` service is running by the account ``cinder``.

#. Run ``su`` as root user.

#. In the ``/etc/passwd`` file, change
   ``cinder:x:113:120::/var/lib/cinder:/bin/false``
   to ``cinder:x:113:120::/var/lib/cinder:/bin/bash`` (this temporary change
   is to make step 4 work).

#. Save the credentials on behalf of the ``cinder`` user to a security file
   (assuming the array credentials are ``admin/admin`` in ``global`` scope). In
   the command below, the ``-secfilepath`` switch is used to specify the
   location to save the security file.

   .. code-block:: console

      # su -l cinder -c \
        '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'

#. Change ``cinder:x:113:120::/var/lib/cinder:/bin/bash`` back to
   ``cinder:x:113:120::/var/lib/cinder:/bin/false`` in the ``/etc/passwd``
   file.

#. Remove the credentials options ``san_login``, ``san_password`` and
   ``storage_vnx_authentication_type`` from the ``cinder.conf`` file (normally
   it is the ``/etc/cinder/cinder.conf`` file). Add the option
   ``storage_vnx_security_file_dir`` and set its value to the directory path of
   the security file generated in the above step. Omit this option if
   ``-secfilepath`` is not used in the above step.

#. Restart the ``cinder-volume`` service to validate the change.


.. _register-fc-port-with-vnx:

Register FC port with VNX
-------------------------

This configuration is only required when ``initiator_auto_registration=False``.

To access VNX storage, the compute nodes should be registered on VNX first if
initiator auto registration is not enabled.

To perform ``Copy Image to Volume`` and ``Copy Volume to Image`` operations,
the nodes running the ``cinder-volume`` service (Block Storage nodes) must be
registered with the VNX as well.

The steps mentioned below are for the compute nodes. Follow the same
steps for the Block Storage nodes also (the steps can be skipped if initiator
auto registration is enabled).

#. Assume ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` is the WWN of a
   FC initiator port name of the compute node whose host name and IP are
   ``myhost1`` and ``10.10.61.1``. Register
   ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` in Unisphere:

#. Log in to :guilabel:`Unisphere`, go to
   :menuselection:`FNM0000000000 > Hosts > Initiators`.

#. Refresh and wait until the initiator
   ``20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2`` with SP Port ``A-1``
   appears.

#. Click the :guilabel:`Register` button, select :guilabel:`CLARiiON/VNX`
   and enter the host name (which is the output of the :command:`hostname`
   command) and IP address:

   -  Hostname: ``myhost1``

   -  IP: ``10.10.61.1``

   -  Click :guilabel:`Register`.

#. Then host ``10.10.61.1`` will appear under
   :menuselection:`Hosts > Host List` as well.

#. Register the ``wwn`` with more ports if needed.

.. _register-iscsi-port-with-vnx:

Register iSCSI port with VNX
----------------------------

This configuration is only required when ``initiator_auto_registration=False``.

To access VNX storage, the compute nodes should be registered on VNX first if
initiator auto registration is not enabled.

To perform ``Copy Image to Volume`` and ``Copy Volume to Image`` operations,
the nodes running the ``cinder-volume`` service (Block Storage nodes) must be
registered with the VNX as well.

The steps mentioned below are for the compute nodes. Follow the
same steps for the Block Storage nodes also (the steps can be skipped if
initiator auto registration is enabled).

#. On the compute node with IP address ``10.10.61.1`` and host name ``myhost1``,
   execute the following commands (assuming ``10.10.61.35`` is the iSCSI
   target):

   #. Start the iSCSI initiator service on the node:

      .. code-block:: console

         # /etc/init.d/open-iscsi start

   #. Discover the iSCSI target portals on VNX:

      .. code-block:: console

         # iscsiadm -m discovery -t st -p 10.10.61.35

   #. Change directory to ``/etc/iscsi`` :

      .. code-block:: console

         # cd /etc/iscsi

   #. Find out the ``iqn`` of the node:

      .. code-block:: console

         # more initiatorname.iscsi

#. Log in to :guilabel:`VNX` from the compute node using the target
   corresponding to the SPA port:

   .. code-block:: console

      # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l

#. Assume ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` is the initiator name of
   the compute node. Register ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` in
   Unisphere:

   #. Log in to :guilabel:`Unisphere`, go to
      :menuselection:`FNM0000000000 > Hosts > Initiators`.

   #. Refresh and wait until the initiator
      ``iqn.1993-08.org.debian:01:1a2b3c4d5f6g`` with SP Port ``A-8v0``
      appears.

   #. Click the :guilabel:`Register` button, select :guilabel:`CLARiiON/VNX`
      and enter the host name
      (which is the output of the :command:`hostname` command) and IP address:

      -  Hostname: ``myhost1``

      -  IP: ``10.10.61.1``

      -  Click :guilabel:`Register`.

   #. Then host ``10.10.61.1`` will appear under
      :menuselection:`Hosts > Host List` as well.

#. Log out :guilabel:`iSCSI` on the node:

   .. code-block:: console

      # iscsiadm -m node -u

#. Log in to :guilabel:`VNX` from the compute node using the target
   corresponding to the SPB port:

   .. code-block:: console

      # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l

#. In ``Unisphere``, register the initiator with the SPB port.

#. Log out :guilabel:`iSCSI` on the node:

   .. code-block:: console

      # iscsiadm -m node -u

#. Register the ``iqn`` with more ports if needed.