1"""
2Manage VMware ESXi Hosts.
3
4.. versionadded:: 2015.8.4
5
6Dependencies
7============
8
9- pyVmomi Python Module
10- ESXCLI
11
12
13pyVmomi
14-------
15
16PyVmomi can be installed via pip:
17
18.. code-block:: bash
19
20    pip install pyVmomi
21
22.. note::
23
24    Version 6.0 of pyVmomi has some problems with SSL error handling on certain
25    versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
26    Python 2.7.9, or newer must be present. This is due to an upstream dependency
27    in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
28    version of Python is not in the supported range, you will need to install an
29    earlier version of pyVmomi. See `Issue #29537`_ for more information.
30
31.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
32
33Based on the note above, to install an earlier version of pyVmomi than the
34version currently listed in PyPi, run the following:
35
36.. code-block:: bash
37
38    pip install pyVmomi==5.5.0.2014.1.1
39
40The 5.5.0.2014.1.1 is a known stable version that this original ESXi State
41Module was developed against.
42
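The version constraint from the note above can be sketched as a simple tuple
comparison (an illustrative helper, not part of this module's API; the module
performs an equivalent check at import time):

```python
# Illustrative sketch of the pyVmomi 6.0 / Python constraint described above:
# CPython 2.7.0 through 2.7.8 is the problematic range, while 2.6 and 2.7.9+
# work. The helper name is hypothetical.
def pyvmomi60_python_ok(version_info):
    """Return True if pyVmomi 6.0 is usable on this Python version tuple."""
    return not ((2, 7) < version_info < (2, 7, 9))

print(pyvmomi60_python_ok((2, 7, 5)))  # False: inside the broken range
print(pyvmomi60_python_ok((2, 7, 9)))  # True
```

Passing a tuple such as ``sys.version_info`` works because Python compares
version tuples element by element.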
ESXCLI
------

Currently, about a third of the functions used in the vSphere Execution Module
require that the ESXCLI package be installed on the machine running the Proxy
Minion process.

The ESXCLI package is also referred to as the VMware vSphere CLI, or vCLI. VMware
provides vCLI package installation instructions for `vSphere 5.5`_ and
`vSphere 6.0`_.

.. _vSphere 5.5: http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vcli.getstart.doc/cli_install.4.2.html
.. _vSphere 6.0: http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vcli.getstart.doc/cli_install.4.2.html

Once all of the required dependencies are in place and the vCLI package is
installed, you can check to see if you can connect to your ESXi host or vCenter
server by running the following command:

.. code-block:: bash

    esxcli -s <host-location> -u <username> -p <password> system syslog config get

If the connection was successful, ESXCLI was installed correctly and you should
see output related to the ESXi host's syslog configuration.

.. note::

    Be aware that some functionality in this state module may depend on the
    type of license attached to the ESXi host.

    For example, certain services are only available to manipulate service state
    or policies with a VMware vSphere Enterprise or Enterprise Plus license, while
    others are available with a Standard license. The ``ntpd`` service is restricted
    to an Enterprise Plus license, while ``ssh`` is available via the Standard
    license.

    Please see the `vSphere Comparison`_ page for more information.

.. _vSphere Comparison: https://www.vmware.com/products/vsphere/compare

About
-----

This state module was written to be used in conjunction with Salt's
:mod:`ESXi Proxy Minion <salt.proxy.esxi>`. For a tutorial on how to use Salt's
ESXi Proxy Minion, please refer to the
:ref:`ESXi Proxy Minion Tutorial <tutorial-esxi-proxy>` for
configuration examples, dependency installation instructions, how to run remote
execution functions against ESXi hosts via a Salt Proxy Minion, and a larger state
example.
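
Every state function in this module returns the standard Salt state dictionary.
A minimal sketch of the shape those return values take (the values shown are
illustrative only):

```python
# Illustrative Salt state return dictionary, as built by the states below.
ret = {
    "name": "configure-host-ntp",  # hypothetical state ID
    "result": None,  # True on success, False on failure, None with test=True
    "changes": {"ntp_servers": {"old": [], "new": ["192.174.1.100"]}},
    "comment": "NTP state will change.",
}
print(sorted(ret))
```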
92"""
93
94import logging
95import re
96import sys
97
98import salt.utils.files
99from salt.config.schemas.esxi import DiskGroupsDiskScsiAddressSchema, HostCacheSchema
100from salt.exceptions import (
101    ArgumentValueError,
102    CommandExecutionError,
103    InvalidConfigError,
104    VMwareApiError,
105    VMwareObjectRetrievalError,
106    VMwareSaltError,
107)
108from salt.utils.decorators import depends
109
110# External libraries
111try:
112    import jsonschema
113
114    HAS_JSONSCHEMA = True
115except ImportError:
116    HAS_JSONSCHEMA = False
117
118# Get Logging Started
119log = logging.getLogger(__name__)
120
121try:
122    from pyVmomi import VmomiSupport
123
124    # We check the supported vim versions to infer the pyVmomi version
125    if (
126        "vim25/6.0" in VmomiSupport.versionMap
127        and sys.version_info > (2, 7)
128        and sys.version_info < (2, 7, 9)
129    ):
130
131        log.debug(
132            "pyVmomi not loaded: Incompatible versions of Python. See Issue #29537."
133        )
134        raise ImportError()
135    HAS_PYVMOMI = True
136except ImportError:
137    HAS_PYVMOMI = False
138
139
140def __virtual__():
141    if "esxi.cmd" in __salt__:
142        return True
143    return (False, "esxi module could not be loaded")
144
145
def coredump_configured(name, enabled, dump_ip, host_vnic="vmk0", dump_port=6500):
    """
    Ensures a host's core dump configuration.

    name
        Name of the state.

    enabled
        Sets whether or not ESXi core dump collection should be enabled.
        This is a boolean value set to ``True`` or ``False`` to enable
        or disable core dumps.

        Note that ESXi requires that the core dump must be enabled before
        any other parameters may be set. This also affects the ``changes``
        results in the state return dictionary. If ``enabled`` is ``False``,
        we can't obtain any previous settings to compare other state variables,
        resulting in many ``old`` references returning ``None``.

        Once ``enabled`` is ``True``, the ``changes`` dictionary comparisons
        will be more accurate. This is due to the way the system coredump
        network configuration command returns data.

    dump_ip
        The IP address of the host that will accept the dump.

    host_vnic
        Host VNic port through which to communicate. Defaults to ``vmk0``.

    dump_port
        TCP port to use for the dump. Defaults to ``6500``.

    Example:

    .. code-block:: yaml

        configure-host-coredump:
          esxi.coredump_configured:
            - enabled: True
            - dump_ip: 'my-coredump-ip.example.com'

    """
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    esxi_cmd = "esxi.cmd"
    enabled_msg = (
        "ESXi requires that the core dump must be enabled "
        "before any other parameters may be set."
    )
    host = __pillar__["proxy"]["host"]

    current_config = __salt__[esxi_cmd]("get_coredump_network_config").get(host)
    error = current_config.get("Error")
    if error:
        ret["comment"] = "Error: {}".format(error)
        return ret

    current_config = current_config.get("Coredump Config")
    current_enabled = current_config.get("enabled")

    # Configure coredump enabled state, if there are changes.
    if current_enabled != enabled:
        enabled_changes = {"enabled": {"old": current_enabled, "new": enabled}}
        # Only run the command if not using test=True
        if not __opts__["test"]:
            response = __salt__[esxi_cmd](
                "coredump_network_enable", enabled=enabled
            ).get(host)
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret

            # Allow users to disable core dump, but then return since
            # nothing else can be set if core dump is disabled.
            if not enabled:
                ret["result"] = True
                ret["comment"] = enabled_msg
                ret["changes"].update(enabled_changes)
                return ret

        ret["changes"].update(enabled_changes)

    elif not enabled:
        # If current_enabled and enabled match, but are both False,
        # we must return before configuring anything. This isn't a
        # failure as core dump may be disabled intentionally.
        ret["result"] = True
        ret["comment"] = enabled_msg
        return ret

    # Test for changes with all remaining configurations. The changes flag is used
    # to detect changes, and then set_coredump_network_config is called one time.
    changes = False
    current_ip = current_config.get("ip")
    if current_ip != dump_ip:
        ret["changes"].update({"dump_ip": {"old": current_ip, "new": dump_ip}})
        changes = True

    current_vnic = current_config.get("host_vnic")
    if current_vnic != host_vnic:
        ret["changes"].update({"host_vnic": {"old": current_vnic, "new": host_vnic}})
        changes = True

    current_port = current_config.get("port")
    if current_port != str(dump_port):
        ret["changes"].update(
            {"dump_port": {"old": current_port, "new": str(dump_port)}}
        )
        changes = True

    # Only run the command if not using test=True and changes were detected.
    if not __opts__["test"] and changes is True:
        response = __salt__[esxi_cmd](
            "set_coredump_network_config",
            dump_ip=dump_ip,
            host_vnic=host_vnic,
            dump_port=dump_port,
        ).get(host)
        if response.get("success") is False:
            msg = response.get("stderr")
            if not msg:
                msg = response.get("stdout")
            ret["comment"] = "Error: {}".format(msg)
            return ret

    ret["result"] = True
    if ret["changes"] == {}:
        ret["comment"] = "Core Dump configuration is already in the desired state."
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "Core dump configuration will change."

    return ret


def password_present(name, password):
    """
    Ensures the given password is set on the ESXi host. Passwords cannot be obtained
    from the host, so if a password is set in this state, the
    ``vsphere.update_host_password`` function will always run (except when using
    ``test=True``) and the state's changes dictionary will always be populated.

    The username for which the password will change is the same username that is used to
    authenticate against the ESXi host via the Proxy Minion. For example, if the pillar
    definition for the proxy username is defined as ``root``, then the username that the
    password will be updated for via this state is ``root``.

    name
        Name of the state.

    password
        The new password to change on the host.

    Example:

    .. code-block:: yaml

        configure-host-password:
          esxi.password_present:
            - password: 'new-bad-password'
    """
    ret = {
        "name": name,
        "result": True,
        "changes": {"old": "unknown", "new": "********"},
        "comment": "Host password was updated.",
    }
    esxi_cmd = "esxi.cmd"

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "Host password will change."
        return ret
    else:
        try:
            __salt__[esxi_cmd]("update_host_password", new_password=password)
        except CommandExecutionError as err:
            ret["result"] = False
            ret["comment"] = "Error: {}".format(err)
            return ret

    return ret


def ntp_configured(
    name,
    service_running,
    ntp_servers=None,
    service_policy=None,
    service_restart=False,
    update_datetime=False,
):
    """
    Ensures a host's NTP server configuration such as setting NTP servers, ensuring the
    NTP daemon is running or stopped, or restarting the NTP daemon for the ESXi host.

    name
        Name of the state.

    service_running
        Ensures the running state of the ntp daemon for the host. Boolean value where
        ``True`` indicates that ntpd should be running and ``False`` indicates that it
        should be stopped.

    ntp_servers
        A list of servers that should be added to the ESXi host's NTP configuration.

    service_policy
        The policy to set for the NTP service.

        .. note::

            When setting the service policy to ``off`` or ``on``, you *must* quote the
            setting. If you don't, the yaml parser will set the string to a boolean,
            which will cause trouble checking for stateful changes and will error when
            trying to set the policy on the ESXi host.

    service_restart
        If set to ``True``, the ntp daemon will be restarted, regardless of its previous
        running state. Default is ``False``.

    update_datetime
        If set to ``True``, the date/time on the given host will be updated to UTC.
        Default setting is ``False``. This option should be used with caution since
        network delays and execution delays can result in time skews.

    Example:

    .. code-block:: yaml

        configure-host-ntp:
          esxi.ntp_configured:
            - service_running: True
            - ntp_servers:
              - 192.174.1.100
              - 192.174.1.200
            - service_policy: 'on'
            - service_restart: True

    """
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    esxi_cmd = "esxi.cmd"
    host = __pillar__["proxy"]["host"]
    ntpd = "ntpd"

    ntp_config = __salt__[esxi_cmd]("get_ntp_config").get(host)
    ntp_running = __salt__[esxi_cmd]("get_service_running", service_name=ntpd).get(host)
    error = ntp_running.get("Error")
    if error:
        ret["comment"] = "Error: {}".format(error)
        return ret
    ntp_running = ntp_running.get(ntpd)

    # Configure NTP Servers for the Host
    if ntp_servers and set(ntp_servers) != set(ntp_config):
        # Only run the command if not using test=True
        if not __opts__["test"]:
            response = __salt__[esxi_cmd](
                "set_ntp_config", ntp_servers=ntp_servers
            ).get(host)
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret
        # Set changes dictionary for ntp_servers
        ret["changes"].update({"ntp_servers": {"old": ntp_config, "new": ntp_servers}})

    # Configure service_running state
    if service_running != ntp_running:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            # Start ntpd if service_running=True
            if service_running is True:
                response = __salt__[esxi_cmd]("service_start", service_name=ntpd).get(
                    host
                )
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            # Stop ntpd if service_running=False
            else:
                response = __salt__[esxi_cmd]("service_stop", service_name=ntpd).get(
                    host
                )
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
        ret["changes"].update(
            {"service_running": {"old": ntp_running, "new": service_running}}
        )

    # Configure service_policy
    if service_policy:
        current_service_policy = __salt__[esxi_cmd](
            "get_service_policy", service_name=ntpd
        ).get(host)
        error = current_service_policy.get("Error")
        if error:
            ret["comment"] = "Error: {}".format(error)
            return ret
        current_service_policy = current_service_policy.get(ntpd)

        if service_policy != current_service_policy:
            # Only run the command if not using test=True
            if not __opts__["test"]:
                response = __salt__[esxi_cmd](
                    "set_service_policy",
                    service_name=ntpd,
                    service_policy=service_policy,
                ).get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            ret["changes"].update(
                {
                    "service_policy": {
                        "old": current_service_policy,
                        "new": service_policy,
                    }
                }
            )

    # Update datetime, if requested.
    if update_datetime:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            response = __salt__[esxi_cmd]("update_host_datetime").get(host)
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret
        ret["changes"].update(
            {"update_datetime": {"old": "", "new": "Host datetime was updated."}}
        )

    # Restart ntp_service if service_restart=True
    if service_restart:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            response = __salt__[esxi_cmd]("service_restart", service_name=ntpd).get(
                host
            )
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret
        ret["changes"].update(
            {"service_restart": {"old": "", "new": "NTP Daemon Restarted."}}
        )

    ret["result"] = True
    if ret["changes"] == {}:
        ret["comment"] = "NTP is already in the desired state."
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "NTP state will change."

    return ret


def vmotion_configured(name, enabled, device="vmk0"):
    """
    Configures a host's VMotion properties such as enabling VMotion and setting
    the device VirtualNic that VMotion will use.

    name
        Name of the state.

    enabled
        Ensures whether or not VMotion should be enabled on a host as a boolean
        value where ``True`` indicates that VMotion should be enabled and ``False``
        indicates that VMotion should be disabled.

    device
        The device that uniquely identifies the VirtualNic that will be used for
        VMotion for the host. Defaults to ``vmk0``.

    Example:

    .. code-block:: yaml

        configure-vmotion:
          esxi.vmotion_configured:
            - enabled: True
            - device: sample-device

    """
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    esxi_cmd = "esxi.cmd"
    host = __pillar__["proxy"]["host"]

    current_vmotion_enabled = __salt__[esxi_cmd]("get_vmotion_enabled").get(host)
    current_vmotion_enabled = current_vmotion_enabled.get("VMotion Enabled")

    # Configure VMotion Enabled state, if changed.
    if enabled != current_vmotion_enabled:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            # Enable VMotion if enabled=True
            if enabled is True:
                response = __salt__[esxi_cmd]("vmotion_enable", device=device).get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            # Disable VMotion if enabled=False
            else:
                response = __salt__[esxi_cmd]("vmotion_disable").get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
        ret["changes"].update(
            {"enabled": {"old": current_vmotion_enabled, "new": enabled}}
        )

    ret["result"] = True
    if ret["changes"] == {}:
        ret["comment"] = "VMotion configuration is already in the desired state."
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "VMotion configuration will change."

    return ret


def vsan_configured(name, enabled, add_disks_to_vsan=False):
    """
    Configures a host's VSAN properties such as enabling or disabling VSAN, or
    adding VSAN-eligible disks to the VSAN system for the host.

    name
        Name of the state.

    enabled
        Ensures whether or not VSAN should be enabled on a host as a boolean
        value where ``True`` indicates that VSAN should be enabled and ``False``
        indicates that VSAN should be disabled.

    add_disks_to_vsan
        If set to ``True``, any VSAN-eligible disks for the given host will be added
        to the host's VSAN system. Default is ``False``.

    Example:

    .. code-block:: yaml

        configure-host-vsan:
          esxi.vsan_configured:
            - enabled: True
            - add_disks_to_vsan: True

    """
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    esxi_cmd = "esxi.cmd"
    host = __pillar__["proxy"]["host"]

    current_vsan_enabled = __salt__[esxi_cmd]("get_vsan_enabled").get(host)
    error = current_vsan_enabled.get("Error")
    if error:
        ret["comment"] = "Error: {}".format(error)
        return ret
    current_vsan_enabled = current_vsan_enabled.get("VSAN Enabled")

    # Configure VSAN Enabled state, if changed.
    if enabled != current_vsan_enabled:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            # Enable VSAN if enabled=True
            if enabled is True:
                response = __salt__[esxi_cmd]("vsan_enable").get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            # Disable VSAN if enabled=False
            else:
                response = __salt__[esxi_cmd]("vsan_disable").get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
        ret["changes"].update(
            {"enabled": {"old": current_vsan_enabled, "new": enabled}}
        )

    # Add any eligible disks to VSAN, if requested.
    if add_disks_to_vsan:
        current_eligible_disks = __salt__[esxi_cmd]("get_vsan_eligible_disks").get(host)
        error = current_eligible_disks.get("Error")
        if error:
            ret["comment"] = "Error: {}".format(error)
            return ret

        disks = current_eligible_disks.get("Eligible")
        if disks and isinstance(disks, list):
            # Only run the command if not using test=True
            if not __opts__["test"]:
                response = __salt__[esxi_cmd]("vsan_add_disks").get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret

            ret["changes"].update({"add_disks_to_vsan": {"old": "", "new": disks}})

    ret["result"] = True
    if ret["changes"] == {}:
        ret["comment"] = "VSAN configuration is already in the desired state."
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "VSAN configuration will change."

    return ret


def ssh_configured(
    name,
    service_running,
    ssh_key=None,
    ssh_key_file=None,
    service_policy=None,
    service_restart=False,
    certificate_verify=None,
):
    """
    Manage the SSH configuration for a host including whether or not SSH is running or
    the presence of a given SSH key. Note: Only one ssh key can be uploaded for root.
    Uploading a second key will replace any existing key.

    name
        Name of the state.

    service_running
        Ensures whether or not the SSH service should be running on a host. Represented
        as a boolean value where ``True`` indicates that SSH should be running and
        ``False`` indicates that SSH should be stopped.

        In order to update SSH keys, the SSH service must be running.

    ssh_key
        Public SSH key to be added to the authorized_keys file on the ESXi host. You
        can use ``ssh_key`` or ``ssh_key_file``, but not both.

    ssh_key_file
        File containing the public SSH key to be added to the authorized_keys file on
        the ESXi host. You can use ``ssh_key_file`` or ``ssh_key``, but not both.

    service_policy
        The policy to set for the SSH service.

        .. note::

            When setting the service policy to ``off`` or ``on``, you *must* quote the
            setting. If you don't, the yaml parser will set the string to a boolean,
            which will cause trouble checking for stateful changes and will error when
            trying to set the policy on the ESXi host.

    service_restart
        If set to ``True``, the SSH service will be restarted, regardless of its
        previous running state. Default is ``False``.

    certificate_verify
        If set to ``True``, the SSL connection must present a valid certificate.
        Default is ``True``.

    Example:

    .. code-block:: yaml

        configure-host-ssh:
          esxi.ssh_configured:
            - service_running: True
            - ssh_key_file: /etc/salt/ssh_keys/my_key.pub
            - service_policy: 'on'
            - service_restart: True
            - certificate_verify: True

    """
    if certificate_verify is None:
        certificate_verify = True
    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
    esxi_cmd = "esxi.cmd"
    host = __pillar__["proxy"]["host"]
    ssh = "ssh"

    ssh_running = __salt__[esxi_cmd]("get_service_running", service_name=ssh).get(host)
    error = ssh_running.get("Error")
    if error:
        ret["comment"] = "Error: {}".format(error)
        return ret
    ssh_running = ssh_running.get(ssh)

    # Configure SSH service_running state, if changed.
    if service_running != ssh_running:
        # Only actually run the command if not using test=True
        if not __opts__["test"]:
            # Start SSH if service_running=True
            if service_running is True:
                enable = __salt__[esxi_cmd]("service_start", service_name=ssh).get(host)
                error = enable.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            # Disable SSH if service_running=False
            else:
                disable = __salt__[esxi_cmd]("service_stop", service_name=ssh).get(host)
                error = disable.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret

        ret["changes"].update(
            {"service_running": {"old": ssh_running, "new": service_running}}
        )

    # If uploading an SSH key or SSH key file, see if there's a current
    # SSH key and compare the current key to the key set in the state.
    current_ssh_key, ssh_key_changed = None, False
    if ssh_key or ssh_key_file:
        current_ssh_key = __salt__[esxi_cmd](
            "get_ssh_key", certificate_verify=certificate_verify
        )
        error = current_ssh_key.get("Error")
        if error:
            ret["comment"] = "Error: {}".format(error)
            return ret
        current_ssh_key = current_ssh_key.get("key")
        if current_ssh_key:
            clean_current_key = _strip_key(current_ssh_key).split(" ")
            if not ssh_key:
                ssh_key = ""
                # Open ssh key file and read in contents to create one key string
                with salt.utils.files.fopen(ssh_key_file, "r") as key_file:
                    for line in key_file:
                        if line.startswith("#"):
                            # Commented line
                            continue
                        ssh_key = ssh_key + line

            clean_ssh_key = _strip_key(ssh_key).split(" ")
            # Check that the first two list items of clean key lists are equal.
            if (
                clean_current_key[0] != clean_ssh_key[0]
                or clean_current_key[1] != clean_ssh_key[1]
            ):
                ssh_key_changed = True
        else:
            # If current_ssh_key is None, but we're setting a new key with
            # either ssh_key or ssh_key_file, then we need to flag the change.
            ssh_key_changed = True

    # Upload SSH key, if changed.
    if ssh_key_changed:
        if not __opts__["test"]:
            # Upload key
            response = __salt__[esxi_cmd](
                "upload_ssh_key",
                ssh_key=ssh_key,
                ssh_key_file=ssh_key_file,
                certificate_verify=certificate_verify,
            )
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret
        ret["changes"].update(
            {
                "SSH Key": {
                    "old": current_ssh_key,
                    "new": ssh_key if ssh_key else ssh_key_file,
                }
            }
        )

    # Configure service_policy
    if service_policy:
        current_service_policy = __salt__[esxi_cmd](
            "get_service_policy", service_name=ssh
        ).get(host)
        error = current_service_policy.get("Error")
        if error:
            ret["comment"] = "Error: {}".format(error)
            return ret
        current_service_policy = current_service_policy.get(ssh)

        if service_policy != current_service_policy:
            # Only run the command if not using test=True
            if not __opts__["test"]:
                response = __salt__[esxi_cmd](
                    "set_service_policy",
                    service_name=ssh,
                    service_policy=service_policy,
                ).get(host)
                error = response.get("Error")
                if error:
                    ret["comment"] = "Error: {}".format(error)
                    return ret
            ret["changes"].update(
                {
                    "service_policy": {
                        "old": current_service_policy,
                        "new": service_policy,
                    }
                }
            )

    # Restart ssh_service if service_restart=True
    if service_restart:
        # Only run the command if not using test=True
        if not __opts__["test"]:
            response = __salt__[esxi_cmd]("service_restart", service_name=ssh).get(host)
            error = response.get("Error")
            if error:
                ret["comment"] = "Error: {}".format(error)
                return ret
        ret["changes"].update(
            {"service_restart": {"old": "", "new": "SSH service restarted."}}
        )

    ret["result"] = True
    if ret["changes"] == {}:
        ret["comment"] = "SSH service is already in the desired state."
        return ret

    if __opts__["test"]:
        ret["result"] = None
        ret["comment"] = "SSH service state will change."

    return ret


889def syslog_configured(
890    name,
891    syslog_configs,
892    firewall=True,
893    reset_service=True,
894    reset_syslog_config=False,
895    reset_configs=None,
896):
897    """
    Ensures the specified syslog configuration parameters are set. By default,
    this state will reset the syslog service after any new or changed
    parameters are set successfully.
901
902    name
903        Name of the state.
904
    syslog_configs
        Mapping of parameter names to the values to set. Parameter names
        correspond to the esxcli command line switches, without the double
        dashes (``--``).

        Valid syslog_config values are ``logdir``, ``loghost``, ``logdir-unique``,
        ``default-rotate``, ``default-size``, and ``default-timeout``.

        Each syslog_config option also needs a configuration value to set.
        For example, ``loghost`` requires URLs or IP addresses to use for
        logging. Multiple log servers can be specified by listing them,
        comma-separated, but without spaces before or after the commas.

        (Reference: https://blogs.vmware.com/vsphere/2012/04/configuring-multiple-syslog-servers-for-esxi-5.html)
918
919    firewall
920        Enable the firewall rule set for syslog. Defaults to ``True``.
921
922    reset_service
923        After a successful parameter set, reset the service. Defaults to ``True``.
924
925    reset_syslog_config
926        Resets the syslog service to its default settings. Defaults to ``False``.
927        If set to ``True``, default settings defined by the list of syslog configs
928        in ``reset_configs`` will be reset before running any other syslog settings.
929
930    reset_configs
931        A comma-delimited list of parameters to reset. Only runs if
932        ``reset_syslog_config`` is set to ``True``. If ``reset_syslog_config`` is set
933        to ``True``, but no syslog configs are listed in ``reset_configs``, then
934        ``reset_configs`` will be set to ``all`` by default.
935
936        See ``syslog_configs`` parameter above for a list of valid options.
937
938    Example:
939
940    .. code-block:: yaml
941
942        configure-host-syslog:
943          esxi.syslog_configured:
944            - syslog_configs:
945                loghost: ssl://localhost:5432,tcp://10.1.0.1:1514
946                default-timeout: 120
947            - firewall: True
948            - reset_service: True
949            - reset_syslog_config: True
950            - reset_configs: loghost,default-timeout
951    """
952    ret = {"name": name, "result": False, "changes": {}, "comment": ""}
953    esxi_cmd = "esxi.cmd"
954    host = __pillar__["proxy"]["host"]
955
956    if reset_syslog_config:
957        if not reset_configs:
958            reset_configs = "all"
959        # Only run the command if not using test=True
960        if not __opts__["test"]:
961            reset = __salt__[esxi_cmd](
962                "reset_syslog_config", syslog_config=reset_configs
963            ).get(host)
964            for key, val in reset.items():
965                if isinstance(val, bool):
966                    continue
967                if not val.get("success"):
968                    msg = val.get("message")
969                    if not msg:
                        msg = (
                            "There was an error resetting syslog config '{}'. "
                            "Please check debug logs.".format(key)
                        )
974                    ret["comment"] = "Error: {}".format(msg)
975                    return ret
976
977        ret["changes"].update(
978            {"reset_syslog_config": {"old": "", "new": reset_configs}}
979        )
980
981    current_firewall = __salt__[esxi_cmd]("get_firewall_status").get(host)
982    error = current_firewall.get("Error")
983    if error:
984        ret["comment"] = "Error: {}".format(error)
985        return ret
986
987    current_firewall = current_firewall.get("rulesets").get("syslog")
988    if current_firewall != firewall:
989        # Only run the command if not using test=True
990        if not __opts__["test"]:
991            enabled = __salt__[esxi_cmd](
992                "enable_firewall_ruleset",
993                ruleset_enable=firewall,
994                ruleset_name="syslog",
995            ).get(host)
996            if enabled.get("retcode") != 0:
997                err = enabled.get("stderr")
998                out = enabled.get("stdout")
999                ret["comment"] = "Error: {}".format(err if err else out)
1000                return ret
1001
1002        ret["changes"].update({"firewall": {"old": current_firewall, "new": firewall}})
1003
1004    current_syslog_config = __salt__[esxi_cmd]("get_syslog_config").get(host)
1005    for key, val in syslog_configs.items():
1006        # The output of get_syslog_config has different keys than the keys
        # used to set syslog_config values. We need to look them up first.
1008        try:
1009            lookup_key = _lookup_syslog_config(key)
1010        except KeyError:
1011            ret["comment"] = "'{}' is not a valid config variable.".format(key)
1012            return ret
1013
1014        current_val = current_syslog_config[lookup_key]
1015        if str(current_val) != str(val):
1016            # Only run the command if not using test=True
1017            if not __opts__["test"]:
1018                response = __salt__[esxi_cmd](
1019                    "set_syslog_config",
1020                    syslog_config=key,
1021                    config_value=val,
1022                    firewall=firewall,
1023                    reset_service=reset_service,
1024                ).get(host)
1025                success = response.get(key).get("success")
1026                if not success:
1027                    msg = response.get(key).get("message")
1028                    if not msg:
1029                        msg = (
1030                            "There was an error setting syslog config '{}'. "
1031                            "Please check debug logs.".format(key)
1032                        )
1033                    ret["comment"] = msg
1034                    return ret
1035
1036            if not ret["changes"].get("syslog_config"):
1037                ret["changes"].update({"syslog_config": {}})
1038            ret["changes"]["syslog_config"].update(
1039                {key: {"old": current_val, "new": val}}
1040            )
1041
1042    ret["result"] = True
1043    if ret["changes"] == {}:
1044        ret["comment"] = "Syslog is already in the desired state."
1045        return ret
1046
1047    if __opts__["test"]:
1048        ret["result"] = None
1049        ret["comment"] = "Syslog state will change."
1050
1051    return ret
1052
1053
1054@depends(HAS_PYVMOMI)
1055@depends(HAS_JSONSCHEMA)
1056def diskgroups_configured(name, diskgroups, erase_disks=False):
1057    """
    Configures the disk groups to use for vSAN.

    This function will do the following:

    1. Check whether all disks in the diskgroup spec exist, and raise an
       error if they do not.

    2. Create diskgroups with the correct disk configurations if a diskgroup
       (identified by the cache disk canonical name) doesn't exist.

    3. Add extra capacity disks to the existing diskgroup.
1069
1070    Example:
1071
1072    .. code:: python
1073
1074        {
1075            'cache_scsi_addr': 'vmhba1:C0:T0:L0',
1076            'capacity_scsi_addrs': [
1077                'vmhba2:C0:T0:L0',
1078                'vmhba3:C0:T0:L0',
1079                'vmhba4:C0:T0:L0',
1080            ]
1081        }
1082
1083    name
1084        Mandatory state name
1085
    diskgroups
        Disk group representation containing SCSI disk addresses.
        SCSI addresses are expected for all disks in the diskgroup.
1089
1090    erase_disks
1091        Specifies whether to erase all partitions on all disks member of the
1092        disk group before the disk group is created. Default value is False.
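
    Example state (the SCSI addresses below are illustrative; substitute the
    addresses reported by your host):

    .. code-block:: yaml

        configure-vsan-diskgroups:
          esxi.diskgroups_configured:
            - diskgroups:
                - cache_scsi_addr: 'vmhba1:C0:T0:L0'
                  capacity_scsi_addrs:
                    - 'vmhba2:C0:T0:L0'
                    - 'vmhba3:C0:T0:L0'
                    - 'vmhba4:C0:T0:L0'
            - erase_disks: False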
1093    """
1094    proxy_details = __salt__["esxi.get_details"]()
1095    hostname = (
1096        proxy_details["host"]
1097        if not proxy_details.get("vcenter")
1098        else proxy_details["esxi_host"]
1099    )
1100    log.info("Running state %s for host '%s'", name, hostname)
1101    # Variable used to return the result of the invocation
    ret = {"name": name, "result": None, "changes": {}, "comment": None}
1103    # Signals if errors have been encountered
1104    errors = False
1105    # Signals if changes are required
1106    changes = False
1107    comments = []
1108    diskgroup_changes = {}
1109    si = None
1110    try:
1111        log.trace("Validating diskgroups_configured input")
1112        schema = DiskGroupsDiskScsiAddressSchema.serialize()
1113        try:
1114            jsonschema.validate(
1115                {"diskgroups": diskgroups, "erase_disks": erase_disks}, schema
1116            )
1117        except jsonschema.exceptions.ValidationError as exc:
1118            raise InvalidConfigError(exc)
1119        si = __salt__["vsphere.get_service_instance_via_proxy"]()
1120        host_disks = __salt__["vsphere.list_disks"](service_instance=si)
1121        if not host_disks:
1122            raise VMwareObjectRetrievalError(
1123                "No disks retrieved from host '{}'".format(hostname)
1124            )
1125        scsi_addr_to_disk_map = {d["scsi_address"]: d for d in host_disks}
1126        log.trace("scsi_addr_to_disk_map = %s", scsi_addr_to_disk_map)
1127        existing_diskgroups = __salt__["vsphere.list_diskgroups"](service_instance=si)
1128        cache_disk_to_existing_diskgroup_map = {
1129            dg["cache_disk"]: dg for dg in existing_diskgroups
1130        }
1131    except CommandExecutionError as err:
1132        log.error("Error: %s", err)
1133        if si:
1134            __salt__["vsphere.disconnect"](si)
1135        ret.update(
1136            {"result": False if not __opts__["test"] else None, "comment": str(err)}
1137        )
1138        return ret
1139
1140    # Iterate through all of the disk groups
1141    for idx, dg in enumerate(diskgroups):
1142        # Check for cache disk
        if dg["cache_scsi_addr"] not in scsi_addr_to_disk_map:
1144            comments.append(
1145                "No cache disk with scsi address '{}' was found.".format(
1146                    dg["cache_scsi_addr"]
1147                )
1148            )
1149            log.error(comments[-1])
1150            errors = True
1151            continue
1152
1153        # Check for capacity disks
1154        cache_disk_id = scsi_addr_to_disk_map[dg["cache_scsi_addr"]]["id"]
1155        cache_disk_display = "{} (id:{})".format(dg["cache_scsi_addr"], cache_disk_id)
1156        bad_scsi_addrs = []
1157        capacity_disk_ids = []
1158        capacity_disk_displays = []
1159        for scsi_addr in dg["capacity_scsi_addrs"]:
1160            if scsi_addr not in scsi_addr_to_disk_map:
1161                bad_scsi_addrs.append(scsi_addr)
1162                continue
1163            capacity_disk_ids.append(scsi_addr_to_disk_map[scsi_addr]["id"])
1164            capacity_disk_displays.append(
1165                "{} (id:{})".format(scsi_addr, capacity_disk_ids[-1])
1166            )
1167        if bad_scsi_addrs:
1168            comments.append(
1169                "Error in diskgroup #{}: capacity disks with scsi addresses {} "
1170                "were not found.".format(
1171                    idx, ", ".join(["'{}'".format(a) for a in bad_scsi_addrs])
1172                )
1173            )
1174            log.error(comments[-1])
1175            errors = True
1176            continue
1177
1178        if not cache_disk_to_existing_diskgroup_map.get(cache_disk_id):
1179            # A new diskgroup needs to be created
1180            log.trace("erase_disks = %s", erase_disks)
1181            if erase_disks:
1182                if __opts__["test"]:
1183                    comments.append(
1184                        "State {} will "
1185                        "erase all disks of disk group #{}; "
1186                        "cache disk: '{}', "
1187                        "capacity disk(s): {}."
1188                        "".format(
1189                            name,
1190                            idx,
1191                            cache_disk_display,
1192                            ", ".join(
1193                                ["'{}'".format(a) for a in capacity_disk_displays]
1194                            ),
1195                        )
1196                    )
1197                else:
1198                    # Erase disk group disks
1199                    for disk_id in [cache_disk_id] + capacity_disk_ids:
1200                        __salt__["vsphere.erase_disk_partitions"](
1201                            disk_id=disk_id, service_instance=si
1202                        )
1203                    comments.append(
1204                        "Erased disks of diskgroup #{}; "
1205                        "cache disk: '{}', capacity disk(s): "
1206                        "{}".format(
1207                            idx,
1208                            cache_disk_display,
1209                            ", ".join(
1210                                ["'{}'".format(a) for a in capacity_disk_displays]
1211                            ),
1212                        )
1213                    )
1214                    log.info(comments[-1])
1215
1216            if __opts__["test"]:
1217                comments.append(
1218                    "State {} will create "
1219                    "the disk group #{}; cache disk: '{}', "
1220                    "capacity disk(s): {}.".format(
1221                        name,
1222                        idx,
1223                        cache_disk_display,
1224                        ", ".join(["'{}'".format(a) for a in capacity_disk_displays]),
1225                    )
1226                )
1227                log.info(comments[-1])
1228                changes = True
1229                continue
1230            try:
1231                __salt__["vsphere.create_diskgroup"](
1232                    cache_disk_id,
1233                    capacity_disk_ids,
1234                    safety_checks=False,
1235                    service_instance=si,
1236                )
1237            except VMwareSaltError as err:
1238                comments.append("Error creating disk group #{}: {}.".format(idx, err))
1239                log.error(comments[-1])
1240                errors = True
1241                continue
1242
            comments.append("Created disk group #{}.".format(idx))
1244            log.info(comments[-1])
1245            diskgroup_changes[str(idx)] = {
1246                "new": {"cache": cache_disk_display, "capacity": capacity_disk_displays}
1247            }
1248            changes = True
1249            continue
1250
1251        # The diskgroup exists; checking the capacity disks
1252        log.debug(
1253            "Disk group #%s exists. Checking capacity disks: %s.",
1254            idx,
1255            capacity_disk_displays,
1256        )
1257        existing_diskgroup = cache_disk_to_existing_diskgroup_map.get(cache_disk_id)
1258        existing_capacity_disk_displays = [
1259            "{} (id:{})".format(
1260                [d["scsi_address"] for d in host_disks if d["id"] == disk_id][0],
1261                disk_id,
1262            )
1263            for disk_id in existing_diskgroup["capacity_disks"]
1264        ]
1265        # Populate added disks and removed disks and their displays
1266        added_capacity_disk_ids = []
1267        added_capacity_disk_displays = []
1268        removed_capacity_disk_ids = []
1269        removed_capacity_disk_displays = []
1270        for disk_id in capacity_disk_ids:
1271            if disk_id not in existing_diskgroup["capacity_disks"]:
1272                disk_scsi_addr = [
1273                    d["scsi_address"] for d in host_disks if d["id"] == disk_id
1274                ][0]
1275                added_capacity_disk_ids.append(disk_id)
1276                added_capacity_disk_displays.append(
1277                    "{} (id:{})".format(disk_scsi_addr, disk_id)
1278                )
1279        for disk_id in existing_diskgroup["capacity_disks"]:
1280            if disk_id not in capacity_disk_ids:
1281                disk_scsi_addr = [
1282                    d["scsi_address"] for d in host_disks if d["id"] == disk_id
1283                ][0]
1284                removed_capacity_disk_ids.append(disk_id)
1285                removed_capacity_disk_displays.append(
1286                    "{} (id:{})".format(disk_scsi_addr, disk_id)
1287                )
1288
1289        log.debug(
1290            "Disk group #%s: existing capacity disk ids: %s; added "
1291            "capacity disk ids: %s; removed capacity disk ids: %s",
1292            idx,
1293            existing_capacity_disk_displays,
1294            added_capacity_disk_displays,
1295            removed_capacity_disk_displays,
1296        )
1297
1298        # TODO revisit this when removing capacity disks is supported
1299        if removed_capacity_disk_ids:
1300            comments.append(
1301                "Error removing capacity disk(s) {} from disk group #{}; "
1302                "operation is not supported."
1303                "".format(
1304                    ", ".join(
1305                        ["'{}'".format(id) for id in removed_capacity_disk_displays]
1306                    ),
1307                    idx,
1308                )
1309            )
1310            log.error(comments[-1])
1311            errors = True
1312            continue
1313
1314        if added_capacity_disk_ids:
1315            # Capacity disks need to be added to disk group
1316
1317            # Building a string representation of the capacity disks
1318            # that need to be added
1319            s = ", ".join(["'{}'".format(id) for id in added_capacity_disk_displays])
1320            if __opts__["test"]:
1321                comments.append(
1322                    "State {} will add capacity disk(s) {} to disk group #{}.".format(
1323                        name, s, idx
1324                    )
1325                )
1326                log.info(comments[-1])
1327                changes = True
1328                continue
1329            try:
1330                __salt__["vsphere.add_capacity_to_diskgroup"](
1331                    cache_disk_id,
1332                    added_capacity_disk_ids,
1333                    safety_checks=False,
1334                    service_instance=si,
1335                )
1336            except VMwareSaltError as err:
1337                comments.append(
1338                    "Error adding capacity disk(s) {} to disk group #{}: {}.".format(
1339                        s, idx, err
1340                    )
1341                )
1342                log.error(comments[-1])
1343                errors = True
1344                continue
1345
1346            com = "Added capacity disk(s) {} to disk group #{}".format(s, idx)
1347            log.info(com)
1348            comments.append(com)
1349            diskgroup_changes[str(idx)] = {
1350                "new": {
1351                    "cache": cache_disk_display,
1352                    "capacity": capacity_disk_displays,
1353                },
1354                "old": {
1355                    "cache": cache_disk_display,
1356                    "capacity": existing_capacity_disk_displays,
1357                },
1358            }
1359            changes = True
1360            continue
1361
1362        # No capacity needs to be added
1363        s = "Disk group #{} is correctly configured. Nothing to be done.".format(idx)
1364        log.info(s)
1365        comments.append(s)
1366    __salt__["vsphere.disconnect"](si)
1367
1368    # Build the final return message
    result = (
        True  # no changes/errors
        if not (changes or errors)
        else None  # running in test mode
        if __opts__["test"]
        else False  # found errors
        if errors
        else True  # defaults to True
    )
1378    ret.update(
1379        {"result": result, "comment": "\n".join(comments), "changes": diskgroup_changes}
1380    )
1381    return ret
1382
1383
1384@depends(HAS_PYVMOMI)
1385@depends(HAS_JSONSCHEMA)
1386def host_cache_configured(
1387    name,
1388    enabled,
1389    datastore,
1390    swap_size="100%",
1391    dedicated_backing_disk=False,
1392    erase_backing_disk=False,
1393):
1394    """
1395    Configures the host cache used for swapping.
1396
1397    It will do the following:
1398
1399    1. Checks if backing disk exists
1400
1401    2. Creates the VMFS datastore if doesn't exist (datastore partition will be
1402       created and use the entire disk)
1403
1404    3. Raises an error if ``dedicated_backing_disk`` is ``True`` and partitions
1405       already exist on the backing disk
1406
1407    4. Configures host_cache to use a portion of the datastore for caching
1408       (either a specific size or a percentage of the datastore)
1409
1410    Examples
1411
1412    Percentage swap size (can't be 100%)
1413
1414    .. code:: python
1415
1416        {
1417            'enabled': true,
1418            'datastore': {
1419                'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
1420                'vmfs_version': 5,
1421                'name': 'hostcache'
1422                }
1423            'dedicated_backing_disk': false
1424            'swap_size': '98%',
1425        }
1426
    Fixed swap size
1428
1429    .. code:: python
1430
1431        {
1432            'enabled': true,
1433            'datastore': {
1434                'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
1435                'vmfs_version': 5,
1436                'name': 'hostcache'
1437                }
1438            'dedicated_backing_disk': true
1439            'swap_size': '10GiB',
1440        }
1441
1442    name
1443        Mandatory state name.
1444
1445    enabled
1446        Specifies whether the host cache is enabled.
1447
1448    datastore
1449        Specifies the host cache datastore.
1450
1451    swap_size
1452        Specifies the size of the host cache swap. Can be a percentage or a
1453        value in GiB. Default value is ``100%``.
1454
    dedicated_backing_disk
        Specifies whether the backing disk is dedicated to the host cache,
        which means it must have no other partitions. Default value is False.
1458
1459    erase_backing_disk
1460        Specifies whether to erase all partitions on the backing disk before
1461        the datastore is created. Default value is False.
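
    Example state (the datastore name and backing disk address below are
    illustrative):

    .. code-block:: yaml

        configure-host-cache:
          esxi.host_cache_configured:
            - enabled: True
            - datastore:
                backing_disk_scsi_addr: 'vmhba0:C0:T0:L0'
                vmfs_version: 5
                name: hostcache
            - swap_size: '10GiB'
            - dedicated_backing_disk: False
            - erase_backing_disk: False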
1462    """
1463    log.trace("enabled = %s", enabled)
1464    log.trace("datastore = %s", datastore)
1465    log.trace("swap_size = %s", swap_size)
1466    log.trace("erase_backing_disk = %s", erase_backing_disk)
1467    # Variable used to return the result of the invocation
1468    proxy_details = __salt__["esxi.get_details"]()
1469    hostname = (
1470        proxy_details["host"]
1471        if not proxy_details.get("vcenter")
1472        else proxy_details["esxi_host"]
1473    )
1474    log.trace("hostname = %s", hostname)
1475    log.info("Running host_cache_swap_configured for host '%s'", hostname)
1476    ret = {
1477        "name": hostname,
1478        "comment": "Default comments",
1479        "result": None,
1480        "changes": {},
1481    }
1482    result = None if __opts__["test"] else True  # We assume success
1483    needs_setting = False
1484    comments = []
1485    changes = {}
1486    si = None
1487    try:
1488        log.debug("Validating host_cache_configured input")
1489        schema = HostCacheSchema.serialize()
1490        try:
1491            jsonschema.validate(
1492                {
1493                    "enabled": enabled,
1494                    "datastore": datastore,
1495                    "swap_size": swap_size,
1496                    "erase_backing_disk": erase_backing_disk,
1497                },
1498                schema,
1499            )
1500        except jsonschema.exceptions.ValidationError as exc:
1501            raise InvalidConfigError(exc)
1502        m = re.match(r"(\d+)(%|GiB)", swap_size)
1503        swap_size_value = int(m.group(1))
1504        swap_type = m.group(2)
1505        log.trace("swap_size_value = %s; swap_type = %s", swap_size_value, swap_type)
1506        si = __salt__["vsphere.get_service_instance_via_proxy"]()
1507        host_cache = __salt__["vsphere.get_host_cache"](service_instance=si)
1508
1509        # Check enabled
1510        if host_cache["enabled"] != enabled:
1511            changes.update({"enabled": {"old": host_cache["enabled"], "new": enabled}})
1512            needs_setting = True
1513
1514        # Check datastores
1515        existing_datastores = None
1516        if host_cache.get("datastore"):
1517            existing_datastores = __salt__["vsphere.list_datastores_via_proxy"](
1518                datastore_names=[datastore["name"]], service_instance=si
1519            )
1520        # Retrieve backing disks
1521        existing_disks = __salt__["vsphere.list_disks"](
1522            scsi_addresses=[datastore["backing_disk_scsi_addr"]], service_instance=si
1523        )
1524        if not existing_disks:
1525            raise VMwareObjectRetrievalError(
1526                "Disk with scsi address '{}' was not found in host '{}'".format(
1527                    datastore["backing_disk_scsi_addr"], hostname
1528                )
1529            )
1530        backing_disk = existing_disks[0]
1531        backing_disk_display = "{} (id:{})".format(
1532            backing_disk["scsi_address"], backing_disk["id"]
1533        )
1534        log.trace("backing_disk = %s", backing_disk_display)
1535
1536        existing_datastore = None
1537        if not existing_datastores:
1538            # Check if disk needs to be erased
1539            if erase_backing_disk:
1540                if __opts__["test"]:
1541                    comments.append(
1542                        "State {} will erase the backing disk '{}' on host '{}'.".format(
1543                            name, backing_disk_display, hostname
1544                        )
1545                    )
1546                    log.info(comments[-1])
1547                else:
1548                    # Erase disk
1549                    __salt__["vsphere.erase_disk_partitions"](
1550                        disk_id=backing_disk["id"], service_instance=si
1551                    )
1552                    comments.append(
1553                        "Erased backing disk '{}' on host '{}'.".format(
1554                            backing_disk_display, hostname
1555                        )
1556                    )
1557                    log.info(comments[-1])
1558            # Create the datastore
1559            if __opts__["test"]:
1560                comments.append(
1561                    "State {} will create the datastore '{}', with backing disk "
1562                    "'{}', on host '{}'.".format(
1563                        name, datastore["name"], backing_disk_display, hostname
1564                    )
1565                )
1566                log.info(comments[-1])
1567            else:
1568                if dedicated_backing_disk:
1569                    # Check backing disk doesn't already have partitions
1570                    partitions = __salt__["vsphere.list_disk_partitions"](
1571                        disk_id=backing_disk["id"], service_instance=si
1572                    )
1573                    log.trace("partitions = %s", partitions)
1574                    # We will ignore the mbr partitions
1575                    non_mbr_partitions = [p for p in partitions if p["format"] != "mbr"]
1576                    if len(non_mbr_partitions) > 0:
1577                        raise VMwareApiError(
1578                            "Backing disk '{}' has unexpected partitions".format(
1579                                backing_disk_display
1580                            )
1581                        )
1582                __salt__["vsphere.create_vmfs_datastore"](
1583                    datastore["name"],
1584                    existing_disks[0]["id"],
1585                    datastore["vmfs_version"],
1586                    service_instance=si,
1587                )
1588                comments.append(
1589                    "Created vmfs datastore '{}', backed by "
1590                    "disk '{}', on host '{}'.".format(
1591                        datastore["name"], backing_disk_display, hostname
1592                    )
1593                )
1594                log.info(comments[-1])
1595                changes.update(
1596                    {
                        "datastore": {
                            "new": {
                                "name": datastore["name"],
                                "backing_disk": backing_disk_display,
                            }
                        }
                    }
                )
                existing_datastore = __salt__["vsphere.list_datastores_via_proxy"](
                    datastore_names=[datastore["name"]], service_instance=si
                )[0]
            needs_setting = True
        else:
            # Check that the datastore is backed by the correct disk
            if not existing_datastores[0].get("backing_disk_ids"):
                raise VMwareSaltError(
                    "Datastore '{}' doesn't have a backing disk".format(
                        datastore["name"]
                    )
                )
            if backing_disk["id"] not in existing_datastores[0]["backing_disk_ids"]:
                raise VMwareSaltError(
                    "Datastore '{}' is not backed by the correct disk: "
                    "expected '{}'; got {}".format(
                        datastore["name"],
                        backing_disk["id"],
                        ", ".join(
                            [
                                "'{}'".format(disk)
                                for disk in existing_datastores[0]["backing_disk_ids"]
                            ]
                        ),
                    )
                )

            comments.append(
                "Datastore '{}' already exists on host '{}' "
                "and is backed by disk '{}'. Nothing to be "
                "done.".format(datastore["name"], hostname, backing_disk_display)
            )
            existing_datastore = existing_datastores[0]
            log.trace("existing_datastore = %s", existing_datastore)
            log.info(comments[-1])
        if existing_datastore:
            # The following comparisons can be done if the existing_datastore
            # is set; it may not be set if running in test mode
            #
            # We accept the swap size as a percentage of the datastore
            # capacity or as a GiB value; either way it is converted to
            # MiB in multiples of 1024 (a VMware SDK limitation)
            if swap_type == "%":
                # Percentage swap size
                # Convert from bytes to MiB
                raw_size_MiB = (swap_size_value / 100.0) * (
                    existing_datastore["capacity"] / 1024 / 1024
                )
            else:
                raw_size_MiB = swap_size_value * 1024
            log.trace("raw_size = %sMiB", raw_size_MiB)
            swap_size_MiB = int(raw_size_MiB / 1024) * 1024
            log.trace("adjusted swap_size = %sMiB", swap_size_MiB)
            existing_swap_size_MiB = 0
            m = (
                re.match(r"(\d+)MiB", host_cache.get("swap_size"))
                if host_cache.get("swap_size")
                else None
            )
            if m:
                # The host reports swap_size in the expected 'NMiB' format;
                # parse out the number of MiB
                existing_swap_size_MiB = int(m.group(1))
            if existing_swap_size_MiB != swap_size_MiB:
                needs_setting = True
                changes.update(
                    {
                        "swap_size": {
                            "old": "{}GiB".format(existing_swap_size_MiB / 1024),
                            "new": "{}GiB".format(swap_size_MiB / 1024),
                        }
                    }
                )

        if needs_setting:
            if __opts__["test"]:
                comments.append(
                    "State {} will configure the host cache on host '{}' to: {}.".format(
                        name,
                        hostname,
                        {
                            "enabled": enabled,
                            "datastore_name": datastore["name"],
                            "swap_size": swap_size,
                        },
                    )
                )
            else:
                if (existing_datastore["capacity"] / 1024.0 ** 2) < swap_size_MiB:
                    raise ArgumentValueError(
                        "Capacity of host cache datastore '{}' ({} MiB) is "
                        "smaller than the required swap size ({} MiB)".format(
                            existing_datastore["name"],
                            existing_datastore["capacity"] / 1024.0 ** 2,
                            swap_size_MiB,
                        )
                    )
                __salt__["vsphere.configure_host_cache"](
                    enabled,
                    datastore["name"],
                    swap_size_MiB=swap_size_MiB,
                    service_instance=si,
                )
                comments.append("Host cache configured on host '{}'.".format(hostname))
        else:
            comments.append(
                "Host cache on host '{}' is already correctly "
                "configured. Nothing to be done.".format(hostname)
            )
            result = True
        __salt__["vsphere.disconnect"](si)
        log.info(comments[-1])
        ret.update(
            {"comment": "\n".join(comments), "result": result, "changes": changes}
        )
        return ret
    except CommandExecutionError as err:
        log.error("Error: %s.", err)
        if si:
            __salt__["vsphere.disconnect"](si)
        ret.update(
            {
                "result": None if __opts__["test"] else False,
                "comment": "{}.".format(err),
            }
        )
        return ret


def _lookup_syslog_config(config):
    """
    Helper function that looks up syslog_config keys available from
    ``vsphere.get_syslog_config``.
    """
    lookup = {
        "default-timeout": "Default Network Retry Timeout",
        "logdir": "Local Log Output",
        "default-size": "Local Logging Default Rotation Size",
        "logdir-unique": "Log To Unique Subdirectory",
        "default-rotate": "Local Logging Default Rotations",
        "loghost": "Remote Host",
    }

    return lookup.get(config)


def _strip_key(key_string):
    """
    Strips an SSH key string of whitespace and line endings and returns the
    new string.

    key_string
        The string to be stripped.
    """
    # str.strip() and str.replace() return new strings rather than
    # modifying in place, so chain them and return the result.
    return key_string.strip().replace("\r\n", "").replace("\n", "")