
Searched refs:VMs (Results 1 – 25 of 1769) sorted by relevance


/dports/security/vault/vault-1.8.2/vendor/github.com/hashicorp/vic/tests/test-cases/Group25-Host-Affinity/
25-02-Reconfigure.robot
30 Verify Group Contains VMs %{VCH-NAME} 1
34 Verify Group Contains VMs %{VCH-NAME} 1
38 Verify Group Contains VMs %{VCH-NAME} 4
59 Enabling affinity affects existing container VMs
72 Verify Group Contains VMs %{VCH-NAME} 4
75 Enabling affinity affects subsequent container VMs
84 Verify Group Contains VMs %{VCH-NAME} 1
88 Verify Group Contains VMs %{VCH-NAME} 4
91 Disabling affinity affects existing container VMs
98 Verify Group Contains VMs %{VCH-NAME} 1
[all …]
25-02-Reconfigure.md
11 This suite requires a vCenter Server environment where VCHs can be deployed and container VMs creat…
26 7. Verify that the container VMs were added to the DRS VM Group.
47 ### 3. Enabling affinity affects existing container VMs
56 7. Verify that a DRS VM Group was created and that the endpoint VM and container VMs were added to …
62 ### 4. Enabling affinity affects subsequent container VMs
71 7. Verify that the container VMs were added to the DRS VM Group.
74 * Reconfiguring a VCH to enable use of a DRS VM Group affects subsequent container VMs operations.
77 ### 5. Disabling affinity affects existing container VMs
84 5. Verify that the container VMs were added to the DRS VM Group.
89 * Reconfiguring a VCH to disable use of a DRS VM Group affects existing container VMs.
[all …]
25-01-Basic.robot
25 Creating a VCH creates a VM group and container VMs get added to it
30 Verify Group Contains VMs %{VCH-NAME} 1
34 Verify Group Contains VMs %{VCH-NAME} 4
44 Verify Group Contains VMs %{VCH-NAME} 1
58 Verify Group Contains VMs %{VCH-NAME} 4
62 Verify Group Contains VMs %{VCH-NAME} 1
104 Verify Group Contains VMs %{VCH-NAME} 1
/dports/sysutils/py-salt/salt-3004.1/doc/ref/cli/
salt-cloud.rst
67 the VMs specified in the map file are created. If the --hard option is
68 set, then any VMs that exist on configured cloud providers that are
74 Pass in the name(s) of VMs to destroy, salt-cloud will search the
78 of VMs to be deleted.
84 large groups of VMs to be build at once.
180 To create 4 VMs named web1, web2, db1, and db2 from specified profiles:
186 To read in a map file and create all VMs specified therein:
192 To read in a map file and create all VMs specified therein in parallel:
198 To delete any VMs specified in the map file:
204 To delete any VMs NOT specified in the map file:
[all …]
/dports/sysutils/ansible/ansible-4.7.0/ansible_collections/ovirt/ovirt/roles/disaster_recovery/tasks/
unregister_entities.yml
9 # Get all the running VMs and shut them down
10 - name: Fetch running VMs in the setup
16 - name: Check whether file with running VMs info exists
21 - name: Fetch all data of running VMs from file, if exists.
49 - name: If no file exists which contains data of unregistered VMs, set the file with running VMs
recover_engine.yml
60 # all the templates/VMs/Disks
167 # VMs which are based on them
175 # Register all the unregistered VMs after we registered
177 - name: Register VMs
184 # Run all the high availability VMs.
185 - name: Run highly available VMs
192 # Run all the rest of the VMs.
193 - name: Run the rest of the VMs
/dports/sysutils/ansible/ansible-4.7.0/ansible_collections/ovirt/ovirt/roles/disaster_recovery/tasks/recover/
register_vms.yml
2 - name: Fetch unregistered VMs from storage domain
10 - name: Set unregistered VMs
14 # TODO: We should filter out VMs which already exist in the setup (diskless VMs)
report_log_template.j2
2 The following VMs registered successfully: {{ succeed_vm_names | unique | join (", ") }}
5 The following VMs failed to be registered: {{ failed_vm_names | unique | join (", ") }}
14 The following VMs started successfully: {{ succeed_to_run_vms | unique | join (", ") }}
17 The following VMs failed to run: {{ failed_to_run_vms | unique | join (", ") }}
/dports/lang/solidity/solidity_0.8.11/test/libsolidity/syntaxTests/inlineAssembly/
evm_constantinople_on_byzantium.sol
23 …06): The "shl" instruction is only available for Constantinople-compatible VMs (you are currently …
25 …39): The "shr" instruction is only available for Constantinople-compatible VMs (you are currently …
27 …70): The "sar" instruction is only available for Constantinople-compatible VMs (you are currently …
29 … The "create2" instruction is only available for Constantinople-compatible VMs (you are currently …
31 … "extcodehash" instruction is only available for Constantinople-compatible VMs (you are currently …
evm_byzantium_on_homestead.sol
17 …he "returndatasize" instruction is only available for Byzantium-compatible VMs (you are currently …
19 …he "returndatacopy" instruction is only available for Byzantium-compatible VMs (you are currently …
20 …): The "staticcall" instruction is only available for Byzantium-compatible VMs (you are currently …
/dports/sysutils/ansible2/ansible-2.9.27/test/integration/targets/vmware_guest/tasks/
boot_firmware_d1_c1_f0.yml
5 - name: create new VMs with boot_firmware as 'bios'
33 # VCSIM does not recognizes existing VMs boot firmware
36 - name: create new VMs again with boot_firmware as 'bios'
62 - name: create new VMs with boot_firmware as 'efi'
90 # VCSIM does not recognizes existing VMs boot firmware
93 - name: create new VMs again with boot_firmware as 'efi'
network_negative_test.yml
7 - name: create new VMs with non-existent network
38 - name: create new VMs with network and with only IP
71 - name: create new VMs with network and with only netmask
104 - name: create new VMs with network and without network name
137 - name: create new VMs with network and without network name
170 - name: create new VMs with invalid device type
204 - name: create new VMs with invalid device MAC address
239 - name: create new VMs with invalid network type
275 - name: create new VMs with IP, netmask and network type as "DHCP"
311 - name: create new VMs with no network type which set network type as "DHCP"
mem_reservation.yml
8 name: Create new VMs with mem_reservation as 0
37 name: Again create new VMs with mem_reservation as 0
77 name: Again create new VMs with memory_reservation as 0
88 name: Create new VMs without memory_reservation or mem_reservation
117 name: Again create new VMs without memory_reservation or mem_reservation
/dports/sysutils/ansible/ansible-4.7.0/ansible_collections/community/vmware/tests/integration/targets/vmware_guest/tasks/
boot_firmware_d1_c1_f0.yml
5 - name: create new VMs with boot_firmware as 'bios'
33 # VCSIM does not recognizes existing VMs boot firmware
36 - name: create new VMs again with boot_firmware as 'bios'
62 - name: create new VMs with boot_firmware as 'efi'
90 # VCSIM does not recognizes existing VMs boot firmware
93 - name: create new VMs again with boot_firmware as 'efi'
network_negative_test.yml
7 - name: create new VMs with non-existent network
38 - name: create new VMs with network and with only IP
71 - name: create new VMs with network and with only netmask
104 - name: create new VMs with network and without network name
137 - name: create new VMs with network and without network name
170 - name: create new VMs with invalid device type
204 - name: create new VMs with invalid device MAC address
239 - name: create new VMs with invalid network type
275 - name: create new VMs with IP, netmask and network type as "DHCP"
311 - name: create new VMs with no network type which set network type as "DHCP"
mem_reservation.yml
8 name: Create new VMs with mem_reservation as 0
37 name: Again create new VMs with mem_reservation as 0
77 name: Again create new VMs with memory_reservation as 0
88 name: Create new VMs without memory_reservation or mem_reservation
117 name: Again create new VMs without memory_reservation or mem_reservation
create_d1_c1_f0.yml
5 - name: create new VMs
76 - name: modify the new VMs
100 - name: re-modify the new VMs
124 - name: delete the new VMs
145 - name: re-delete the new VMs
/dports/sysutils/py-salt/salt-3004.1/doc/topics/cloud/
qs.rst
108 Create VMs
110 VMs are created by calling ``salt-cloud`` with the following options:
122 Destroy VMs
130 Query VMs
132 You can view details about the VMs you've created using ``--query``:
140 Now that you know how to create and destoy individual VMs, next you should
141 learn how to use a cloud map to create a number of VMs at once.
144 any number of VMs. On subsequent runs, any VMs that do not exist are created,
145 and VMs that are already configured are left unmodified.
/dports/net-mgmt/p5-FusionInventory-Agent/FusionInventory-Agent-2.5.2/resources/virtualization/vboxmanage/
sample3
4 Config file: /home/normation/VirtualBox VMs/Node 1 : Debian/Node 1 : Debian.vbox
5 Snapshot folder: /home/normation/VirtualBox VMs/Node 1 : Debian/Snapshots
6 Log folder: /home/normation/VirtualBox VMs/Node 1 : Debian/Logs
88 Snapshot folder: /home/normation/VirtualBox VMs/Node 2 : Debian/Snapshots
89 Log folder: /home/normation/VirtualBox VMs/Node 2 : Debian/Logs
171 Snapshot folder: /home/normation/VirtualBox VMs/Node 3 : CentOS/Snapshots
172 Log folder: /home/normation/VirtualBox VMs/Node 3 : CentOS/Logs
254 Snapshot folder: /home/normation/VirtualBox VMs/Node 4 : CentOS/Snapshots
255 Log folder: /home/normation/VirtualBox VMs/Node 4 : CentOS/Logs
337 Snapshot folder: /home/normation/VirtualBox VMs/FormationCFengine/Snapshots
[all …]
/dports/net/google-cloud-sdk/google-cloud-sdk/lib/googlecloudsdk/schemas/compute/alpha/
BackendServiceFailoverPolicy.yaml
37 all backup backend VMs are unhealthy. If set to false, connections are
38 distributed among all primary VMs when all primary and all backup backend
39 VMs are unhealthy. The default is false.
45 balancer performs a failover when the number of healthy primary VMs equals
47 total number of healthy primary VMs is less than this ratio.
/dports/net/google-cloud-sdk/google-cloud-sdk/lib/googlecloudsdk/schemas/compute/v1/
BackendServiceFailoverPolicy.yaml
37 all backup backend VMs are unhealthy. If set to false, connections are
38 distributed among all primary VMs when all primary and all backup backend
39 VMs are unhealthy. The default is false.
45 balancer performs a failover when the number of healthy primary VMs equals
47 total number of healthy primary VMs is less than this ratio.
/dports/net/google-cloud-sdk/google-cloud-sdk/lib/googlecloudsdk/schemas/compute/beta/
BackendServiceFailoverPolicy.yaml
37 all backup backend VMs are unhealthy. If set to false, connections are
38 distributed among all primary VMs when all primary and all backup backend
39 VMs are unhealthy. The default is false.
45 balancer performs a failover when the number of healthy primary VMs equals
47 total number of healthy primary VMs is less than this ratio.
/dports/lang/solidity/solidity_0.8.11/test/libsolidity/syntaxTests/functionCalls/
new_with_calloptions_unsupported.sol
13 // TypeError 5189: (90-116): Unsupported call option "salt" (requires Constantinople-compatible VMs
14 …TypeError 5189: (120-137): Unsupported call option "salt" (requires Constantinople-compatible VMs).
15 …TypeError 5189: (161-181): Unsupported call option "salt" (requires Constantinople-compatible VMs).
/dports/sysutils/ansible/ansible-4.7.0/ansible_collections/ovirt/ovirt/roles/disaster_recovery/tasks/clean/
shutdown_vms.yml
2 # Get all the running VMs related to a storage domain and shut them down
3 - name: Fetch VMs in the storage domain
13 - name: Shutdown VMs
/dports/security/cowrie/cowrie-2.2.0/docs/
BACKEND_POOL.rst
5 Cowrie's proxy. The pool keeps a set of VMs running at all times, ensuring different
8 VMs in the pool have their networking capabilities restricted by default: some attacks
13 The VMs in the backend pool, and all infrastructure (snapshots, networking and filtering)
49 After a while, VMs will start to boot and, when ready to be used, a message of the form
61 * **pool_max_vms**: the number of VMs to be kept running in the pool
96 Recycling VMs
100 VMs reach an inconsistent state or become unreliable. To counter that, and ensure
101 fresh VMs are ready constantly, we use the `recycle_period` to periodically
109 snapshot. If you want to analyse snapshots and see any changes made in the VMs, set
135 * **guest_image_path**: the base image upon which all VMs are created from
[all …]
