# Configuration <a id="configuration"></a>

The Icinga [configuration](https://icinga.com/products/configuration/)
can be easily managed with either the [Icinga Director](https://icinga.com/docs/director/latest/),
config management tools or plain text within the [Icinga DSL](04-configuration.md#configuration).

Before looking into web based configuration or any sort of automation,
we recommend starting with the configuration files and fully understanding
the possibilities of the Icinga DSL (Domain Specific Language).

The package installation provides an example configuration which already
monitors the local Icinga server. You can view the monitoring details
in Icinga Web.

![Icinga Web Local Server](images/configuration/icinga_web_local_server.png)

The [Language Reference](17-language-reference.md#language-reference) chapter explains details
on value types (string, number, dictionaries, etc.) and the general configuration syntax.

## Configuration Best Practice <a id="configuration-best-practice"></a>

If you are ready to configure additional hosts, services, notifications,
dependencies, etc., you should think about the requirements first and then
decide on a strategy.

There are many ways of creating Icinga 2 configuration objects:

* The [Icinga Director](https://icinga.com/docs/director/latest/) as web based and/or automation configuration interface
    * [Monitoring Automation with Icinga - The Director](https://icinga.com/2019/04/23/monitoring-automation-with-icinga-the-director/)
* Manually with your preferred editor, for example vi(m), nano, notepad, etc.
* Generated by a [configuration management tool](13-addons.md#configuration-tools) such as Puppet, Chef, Ansible, etc.
* A custom exporter script from your CMDB or inventory tool
* etc.
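Whichever method you choose, the end result is the same kind of object definition in the Icinga DSL. A minimal sketch of a manually written host (the name and address are made up for illustration; `generic-host` is the template shipped with the example configuration):

```
object Host "app-server-1" {
  /* Reuse the shipped example template for check command and intervals. */
  import "generic-host"

  address = "192.0.2.10"
  vars.os = "Linux"
}
```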

Find the best strategy for your own configuration and ask yourself the following questions:

* Do your hosts share a common group of services (for example Linux hosts with disk, load, etc. checks)?
* Does only a small set of users receive notifications and escalations for all hosts/services?

If you can answer at least one of these questions with yes, look into the
[apply rules](03-monitoring-basics.md#using-apply) logic instead of defining objects on a per
host and service basis.

* Are you required to define specific configuration for each host/service?
* Does your configuration generation tool already know about the host-service relationship?

Then you should look into object specific configuration settings such as `host_name`.

You decide on the "best" layout for configuration files and directories. Ensure that
the [icinga2.conf](04-configuration.md#icinga2-conf) configuration file includes them.

Consider these ideas:

* tree-based on locations, host groups or specific host attributes, with sub levels of directories.
* flat `hosts.conf`, `services.conf`, etc. files for rule based configuration.
* generated configuration with one file per host and a global configuration for groups, users, etc.
* one big file generated from an external application (probably a bad idea for maintaining changes).
* your own.

Whichever strategy you choose, additionally check the following:

* Are there any specific attributes describing the host/service that you could set as `vars` custom variables?
You can later use them for applying assign/ignore rules, or export them to external interfaces.
* Put hosts into host groups and services into service groups, and use these attributes for your apply rules.
* Use templates to store generic attributes for your objects and apply rules, making your configuration more readable.
Details can be found in the [using templates](03-monitoring-basics.md#object-inheritance-using-templates) chapter.
* Apply rules may overlap. Keep a central place (for example, [services.conf](04-configuration.md#services-conf) or [notifications.conf](04-configuration.md#notifications-conf)) storing
the configuration instead of defining apply rules deep in your configuration tree.
* Every plugin used as a check, notification or event command requires a `Command` definition.
Further details can be looked up in the [check commands](03-monitoring-basics.md#check-commands) chapter.

If you are planning to use a distributed monitoring setup with master, satellite and client installations,
take the configuration location into account too. Is everything configured on the master and synced to all other
nodes, or is there any specific local configuration (e.g. health checks)?

There is a detailed chapter on [distributed monitoring scenarios](06-distributed-monitoring.md#distributed-monitoring-scenarios).
Please make sure to read the [introduction](06-distributed-monitoring.md#distributed-monitoring) first.

If you happen to have further questions, do not hesitate to join the
[community forum](https://community.icinga.com)
and ask community members for their experience and best practices.

## Your Configuration <a id="your-configuration"></a>

If you prefer to organize your own local object tree, you can also remove
`include_recursive "conf.d"` from your icinga2.conf file.

Create a new configuration directory, e.g. `objects.d`, and include it
in your icinga2.conf file.

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/objects.d

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/icinga2.conf

/* Local object configuration on our master instance. */
include_recursive "objects.d"
```

This approach is used by the [Icinga 2 Puppet module](https://icinga.com/products/integrations/puppet/).
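Files below `objects.d` use the same DSL as the shipped examples. As a sketch, a hypothetical `objects.d/db-host-1.conf` could define a host together with an object-specific service (the names are made up for illustration; the `mysql` CheckCommand is provided by the ITL):

```
object Host "db-host-1" {
  import "generic-host"

  address = "192.0.2.20"
}

/* Object-specific service bound to exactly one host via `host_name`. */
object Service "mysql-health" {
  import "generic-service"

  host_name = "db-host-1"
  check_command = "mysql"
}
```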

If you plan to set up a distributed setup with HA clusters and clients, please refer to [this chapter](06-distributed-monitoring.md#distributed-monitoring-top-down)
for examples with `zones.d` as the configuration directory.

## Configuration Overview <a id="configuring-icinga2-overview"></a>

### icinga2.conf <a id="icinga2-conf"></a>

An example configuration file is installed for you in `/etc/icinga2/icinga2.conf`.

Here's a brief description of the example configuration:

```
/**
 * Icinga 2 configuration file
 * -- this is where you define settings for the Icinga application including
 * which hosts/services to check.
 *
 * For an overview of all available configuration options please refer
 * to the documentation that is distributed as part of Icinga 2.
 */
```

Icinga 2 supports [C/C++-style comments](17-language-reference.md#comments).

```
/**
 * The constants.conf defines global constants.
 */
include "constants.conf"
```

The `include` directive can be used to include other files.

```
/**
 * The zones.conf defines zones for a cluster setup.
 * Not required for single instance setups.
 */
include "zones.conf"
```

The [Icinga Template Library](10-icinga-template-library.md#icinga-template-library) provides a set of common templates
and [CheckCommand](03-monitoring-basics.md#check-commands) definitions.

```
/**
 * The Icinga Template Library (ITL) provides a number of useful templates
 * and command definitions.
 * Common monitoring plugin command definitions are included separately.
 */
include <itl>
include <plugins>
include <plugins-contrib>
include <manubulon>

/**
 * This includes the Icinga 2 Windows plugins. These command definitions
 * are required on a master node when a client is used as command endpoint.
 */
include <windows-plugins>

/**
 * This includes the NSClient++ check commands. These command definitions
 * are required on a master node when a client is used as command endpoint.
 */
include <nscp>

/**
 * The features-available directory contains a number of configuration
 * files for features which can be enabled and disabled using the
 * icinga2 feature enable / icinga2 feature disable CLI commands.
 * These commands work by creating and removing symbolic links in
 * the features-enabled directory.
 */
include "features-enabled/*.conf"
```

This `include` directive takes care of including the configuration files for all
the features which have been enabled with `icinga2 feature enable`. See
[Enabling/Disabling Features](11-cli-commands.md#enable-features) for more details.

```
/**
 * Although in theory you could define all your objects in this file,
 * the preferred way is to create separate directories and files in the conf.d
 * directory. Each of these files must have the file extension ".conf".
 */
include_recursive "conf.d"
```

You can put your own configuration files in the [conf.d](04-configuration.md#conf-d) directory. This
directive makes sure that all of your own configuration files are included.

### constants.conf <a id="constants-conf"></a>

The `constants.conf` configuration file can be used to define global constants.

By default, you need to make sure to set these constants:

* The `PluginDir` constant must be set to the path where the [Monitoring Plugins](02-installation.md#setting-up-check-plugins) are installed.
This constant is used by a number of
[built-in check command definitions](10-icinga-template-library.md#icinga-template-library).
* The `NodeName` constant defines your local node name. It should be set to the FQDN, which is the default
if not set. This constant is required for local host configuration, monitoring remote clients and
cluster setups.

Example:

```
/* The directory which contains the plugins from the Monitoring Plugins project. */
const PluginDir = "/usr/lib64/nagios/plugins"

/* The directory which contains the Manubulon plugins.
 * Check the documentation, chapter "SNMP Manubulon Plugin Check Commands", for details.
 */
const ManubulonPluginDir = "/usr/lib64/nagios/plugins"

/* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
 * This should be the common name from the API certificate.
 */
//const NodeName = "localhost"

/* Our local zone name. */
const ZoneName = NodeName

/* Secret key for remote node tickets */
const TicketSalt = ""
```

The `ZoneName` and `TicketSalt` constants are required for remote client
and distributed setups. The `node setup/wizard` CLI tools take care of
populating these values.

### zones.conf <a id="zones-conf"></a>

This file can be used to specify the required [Zone](09-object-types.md#objecttype-zone)
and [Endpoint](09-object-types.md#objecttype-endpoint) configuration objects for
[distributed monitoring](06-distributed-monitoring.md#distributed-monitoring).

By default, the `NodeName` and `ZoneName` [constants](04-configuration.md#constants-conf) will be used.

It also contains several [global zones](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
for distributed monitoring environments.

Please make sure to modify this configuration with real names, i.e. use the FQDNs
mentioned in [this chapter](06-distributed-monitoring.md#distributed-monitoring-conventions)
for your `Zone` and `Endpoint` object names.

### The conf.d Directory <a id="conf-d"></a>

This directory contains **example configuration** which should help you get started
with monitoring the local host and its services.
It is included in the
[icinga2.conf](04-configuration.md#icinga2-conf) configuration file by default.

It can be used as a reference example for your own configuration strategy.
Just keep in mind to include the main directories in the
[icinga2.conf](04-configuration.md#icinga2-conf) file.

> **Note**
>
> You can remove the include directive in [icinga2.conf](04-configuration.md#icinga2-conf)
> if you prefer your own way of deploying the Icinga 2 configuration.

Further details on configuration best practice and how to build your
own strategy are described in [this chapter](04-configuration.md#configuration-best-practice).

Available configuration files which are installed by default:

* [hosts.conf](04-configuration.md#hosts-conf)
* [services.conf](04-configuration.md#services-conf)
* [users.conf](04-configuration.md#users-conf)
* [notifications.conf](04-configuration.md#notifications-conf)
* [commands.conf](04-configuration.md#commands-conf)
* [groups.conf](04-configuration.md#groups-conf)
* [templates.conf](04-configuration.md#templates-conf)
* [downtimes.conf](04-configuration.md#downtimes-conf)
* [timeperiods.conf](04-configuration.md#timeperiods-conf)
* [api-users.conf](04-configuration.md#api-users-conf)
* [app.conf](04-configuration.md#app-conf)

#### hosts.conf <a id="hosts-conf"></a>

The `hosts.conf` file contains an example host based on your
`NodeName` setting in [constants.conf](04-configuration.md#constants-conf). You
can use global constants for your object names instead of string
values.

The `import` keyword is used to import the `generic-host` template which
takes care of setting the host check command to `hostalive`. If you
require a different check command, you can override it in the object definition.

The `vars` attribute can be used to define custom variables which are available
for check and notification commands.
Most of the [Plugin Check Commands](10-icinga-template-library.md#icinga-template-library)
in the Icinga Template Library require an `address` attribute.

The custom variable `os` is evaluated by the `linux-servers` group in
[groups.conf](04-configuration.md#groups-conf), making the local host a member.

The example host will show you how to:

* define http vhost attributes for the `http` service apply rule defined
in [services.conf](04-configuration.md#services-conf).
* define disks (all, specific `/`) and their attributes for the `disk`
service apply rule defined in [services.conf](04-configuration.md#services-conf).
* define notification types (`mail`) and set the groups attribute. This
will be used by notification apply rules in [notifications.conf](04-configuration.md#notifications-conf).

If you've installed [Icinga Web 2](02-installation.md#setting-up-icingaweb2), you can
uncomment the http vhost attributes and reload Icinga 2. The apply
rules in [services.conf](04-configuration.md#services-conf) will automatically
generate a new service checking the `/icingaweb2` URI using the `http`
check.

```
/*
 * Host definitions with object attributes
 * used for apply rules for Service, Notification,
 * Dependency and ScheduledDowntime objects.
 *
 * Tip: Use `icinga2 object list --type Host` to
 * list all host objects after running
 * configuration validation (`icinga2 daemon -C`).
 */

/*
 * This is an example host based on your
 * local host's FQDN. Specify the NodeName
 * constant in `constants.conf` or use your
 * own description, e.g. "db-host-1".
 */

object Host NodeName {
  /* Import the default host template defined in `templates.conf`. */
  import "generic-host"

  /* Specify the address attributes for checks e.g. `ssh` or `http`. */
  address = "127.0.0.1"
  address6 = "::1"

  /* Set custom variable `os` for hostgroup assignment in `groups.conf`. */
  vars.os = "Linux"

  /* Define http vhost attributes for service apply rules in `services.conf`. */
  vars.http_vhosts["http"] = {
    http_uri = "/"
  }
  /* Uncomment if you've successfully installed Icinga Web 2. */
  //vars.http_vhosts["Icinga Web 2"] = {
  //  http_uri = "/icingaweb2"
  //}

  /* Define disks and attributes for service apply rules in `services.conf`. */
  vars.disks["disk"] = {
    /* No parameters. */
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }

  /* Define notification mail attributes for notification apply rules in `notifications.conf`. */
  vars.notification["mail"] = {
    /* The UserGroup `icingaadmins` is defined in `users.conf`. */
    groups = [ "icingaadmins" ]
  }
}
```

This is only the host object definition. Now we'll need to make sure that this
host and your additional hosts are getting [services](04-configuration.md#services-conf) applied.

> **Tip**
>
> If you don't understand all the attributes and how to use [apply rules](17-language-reference.md#apply),
> don't worry -- the [monitoring basics](03-monitoring-basics.md#monitoring-basics) chapter will explain
> that in detail.

#### services.conf <a id="services-conf"></a>

These service [apply rules](17-language-reference.md#apply) will show you how to monitor
the local host, but also allow you to re-use or modify them for
your own requirements.

You should define all your service apply rules in `services.conf`
or any other central location, keeping them organized.

By default, the local host will be monitored by the following services:

Service(s)                                  | Applied on host(s)
--------------------------------------------|------------------------
`load`, `procs`, `swap`, `users`, `icinga`  | The `NodeName` host only.
`ping4`, `ping6`                            | All hosts with an `address` resp. `address6` attribute.
`ssh`                                       | All hosts with `address` and `vars.os` set to `Linux`.
`http`, optional: `Icinga Web 2`            | All hosts with the custom variable `http_vhosts` defined as a dictionary.
`disk`, `disk /`                            | All hosts with the custom variable `disks` defined as a dictionary.

The Debian packages also include an additional `apt` service check applied to the local host.

The command object `icinga` for the embedded health check is provided by the
[Icinga Template Library (ITL)](10-icinga-template-library.md#icinga-template-library) while `http_ip`, `ssh`, `load`, `processes`,
`users` and `disk` are all provided by the [Plugin Check Commands](10-icinga-template-library.md#icinga-template-library)
which we enabled earlier by including the `itl` and `plugins` configuration files.

Example `load` service apply rule:

```
apply Service "load" {
  import "generic-service"

  check_command = "load"

  /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
  vars.backup_downtime = "02:00-03:00"

  assign where host.name == NodeName
}
```

The `apply` keyword can be used to create new objects which are associated with
another group of objects. You can `import` existing templates and define (custom)
attributes.

The custom variable `backup_downtime` is set to a specific time range string.
This variable value will be used for applying a `ScheduledDowntime` object to
these services in [downtimes.conf](04-configuration.md#downtimes-conf).

In this example the `assign where` condition is a boolean expression which is
evaluated for all objects of type `Host`, and a new service with the name "load"
is created for each matching host. [Expression operators](17-language-reference.md#expression-operators)
may be used in `assign where` conditions.
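Besides `assign where`, apply rules support `ignore where` conditions which exclude otherwise matching objects. A sketch based on the `load` rule (the `no_load_check` custom variable is made up for illustration):

```
apply Service "load" {
  import "generic-service"

  check_command = "load"

  assign where host.name == NodeName
  /* Opt-out flag: hosts setting this custom variable are skipped. */
  ignore where host.vars.no_load_check == true
}
```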

Multiple `assign where` conditions can be combined with `AND` using the `&&` operator,
as shown in the `ssh` example:

```
apply Service "ssh" {
  import "generic-service"

  check_command = "ssh"

  assign where host.address && host.vars.os == "Linux"
}
```

In this example, the service `ssh` is applied to all hosts having the `address`
attribute defined `AND` having the custom variable `os` set to the string
`Linux`.
You can modify this condition to match multiple expressions by combining `AND`
and `OR` using the `&&` and `||` [operators](17-language-reference.md#expression-operators), for example
`assign where host.address && (vars.os == "Linux" || vars.os == "Windows")`.

A more advanced example is shown by the `http` and `disk` service apply
rules. While one `apply` rule for `ssh` will only create a service for matching
hosts, you can go one step further: generate apply rules based on array items
or dictionary key-value pairs.

The idea is simple: your host in [hosts.conf](04-configuration.md#hosts-conf) defines the
`disks` dictionary as a custom variable in `vars`.

Remember the example from [hosts.conf](04-configuration.md#hosts-conf):

```
...
  /* Define disks and attributes for service apply rules in `services.conf`. */
  vars.disks["disk"] = {
    /* No parameters. */
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }
...
```

This dictionary contains multiple service names we want to monitor. `disk`
should just check all available disks, while `disk /` will pass an additional
parameter `disk_partitions` to the check command.

You'll recognize that the naming is important -- that's the very same name
as it is passed from a service to a check command argument. Read about services
and passing parameters to check commands in [this chapter](03-monitoring-basics.md#command-passing-parameters).
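Because the service names live in the host's dictionary, adding another entry is enough to get another service later on. As a sketch, a hypothetical `/var` partition entry would follow the same pattern:

```
...
  vars.disks["disk /var"] = {
    disk_partitions = "/var"
  }
...
```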

Using `apply Service for` omits the service name; it will take the key stored in
the `disk` variable in `key => config` as the new service object name.

The `for` keyword expects a loop definition, for example `key => value in dictionary`,
as known from Perl and other scripting languages.

Once defined like this, the `apply` rule below will do the following:

* only match hosts with `host.vars.disks` defined through the `assign where` condition
* loop through all entries in the `host.vars.disks` dictionary. That's `disk` and `disk /` as keys.
* call `apply` on each, and set the service object name from the provided key
* inside the apply rule, import the `generic-service` template
* define the [disk](10-icinga-template-library.md#plugin-check-command-disk) check command requiring command arguments like `disk_partitions`
* add the `config` dictionary items to `vars`. Simply said, there's now `vars.disk_partitions` defined for the
generated service

Configuration example:

```
apply Service for (disk => config in host.vars.disks) {
  import "generic-service"

  check_command = "disk"

  vars += config
}
```

A similar example is used for the `http` services. That way you can make your
host the information provider for all apply rules. Define them once, and only
manage your hosts.

Look into [notifications.conf](04-configuration.md#notifications-conf) to see how this technique is used
for applying notifications to hosts and services using their type and user
attributes.

Don't forget to install the [check plugins](02-installation.md#setting-up-check-plugins) required by
the hosts and services and their check commands.

Further details on the monitoring configuration can be found in the
[monitoring basics](03-monitoring-basics.md#monitoring-basics) chapter.
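The `http` counterpart iterates over `host.vars.http_vhosts` in the same way; a sketch consistent with the host example above (the shipped rule in `services.conf` may contain additional attributes):

```
apply Service for (http_vhost => config in host.vars.http_vhosts) {
  import "generic-service"

  check_command = "http"

  /* Merge the vhost's config dictionary, e.g. `http_uri`, into the service vars. */
  vars += config
}
```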
523 524#### users.conf <a id="users-conf"></a> 525 526Defines the `icingaadmin` User and the `icingaadmins` UserGroup. The latter is used in 527[hosts.conf](04-configuration.md#hosts-conf) for defining a custom host attribute later used in 528[notifications.conf](04-configuration.md#notifications-conf) for notification apply rules. 529 530``` 531object User "icingaadmin" { 532 import "generic-user" 533 534 display_name = "Icinga 2 Admin" 535 groups = [ "icingaadmins" ] 536 537 email = "icinga@localhost" 538} 539 540object UserGroup "icingaadmins" { 541 display_name = "Icinga 2 Admin Group" 542} 543``` 544 545#### notifications.conf <a id="notifications-conf"></a> 546 547Notifications for check alerts are an integral part or your 548Icinga 2 monitoring stack. 549 550The examples in this file define two notification apply rules for hosts and services. 551Both `apply` rules match on the same condition: They are only applied if the 552nested dictionary attribute `notification.mail` is set. 553 554Please note that the `to` keyword is important in [notification apply rules](03-monitoring-basics.md#using-apply-notifications) 555defining whether these notifications are applies to hosts or services. 556The `import` keyword imports the specific mail templates defined in [templates.conf](04-configuration.md#templates-conf). 557 558The `interval` attribute is not explicitly set -- it [defaults to 30 minutes](09-object-types.md#objecttype-notification). 559 560By setting the `user_groups` to the value provided by the 561respective [host.vars.notification.mail](04-configuration.md#hosts-conf) attribute we'll 562implicitely use the `icingaadmins` UserGroup defined in [users.conf](04-configuration.md#users-conf). 
563 564``` 565apply Notification "mail-icingaadmin" to Host { 566 import "mail-host-notification" 567 568 user_groups = host.vars.notification.mail.groups 569 users = host.vars.notification.mail.users 570 571 assign where host.vars.notification.mail 572} 573 574apply Notification "mail-icingaadmin" to Service { 575 import "mail-service-notification" 576 577 user_groups = host.vars.notification.mail.groups 578 users = host.vars.notification.mail.users 579 580 assign where host.vars.notification.mail 581} 582``` 583 584More details on defining notifications and their additional attributes such as 585filters can be read in [this chapter](03-monitoring-basics.md#alert-notifications). 586 587#### commands.conf <a id="commands-conf"></a> 588 589This is the place where your own command configuration can be defined. By default 590only the notification commands used by the notification templates defined in [templates.conf](04-configuration.md#templates-conf). 591 592You can freely customize these notification commands, and adapt them for your needs. 593Read more on that topic [here](03-monitoring-basics.md#notification-commands). 594 595#### groups.conf <a id="groups-conf"></a> 596 597The example host defined in [hosts.conf](hosts-conf) already has the 598custom variable `os` set to `Linux` and is therefore automatically 599a member of the host group `linux-servers`. 600 601This is done by using the [group assign](17-language-reference.md#group-assign) expressions similar 602to previously seen [apply rules](03-monitoring-basics.md#using-apply). 603 604``` 605object HostGroup "linux-servers" { 606 display_name = "Linux Servers" 607 608 assign where host.vars.os == "Linux" 609} 610 611object HostGroup "windows-servers" { 612 display_name = "Windows Servers" 613 614 assign where host.vars.os == "Windows" 615} 616``` 617 618Service groups can be grouped together by similar pattern matches. 
The [match function](18-library-reference.md#global-functions-match) expects a wildcard match string
and the attribute string to match with.

```
object ServiceGroup "ping" {
  display_name = "Ping Checks"

  assign where match("ping*", service.name)
}

object ServiceGroup "http" {
  display_name = "HTTP Checks"

  assign where match("http*", service.check_command)
}

object ServiceGroup "disk" {
  display_name = "Disk Checks"

  assign where match("disk*", service.check_command)
}
```

#### templates.conf <a id="templates-conf"></a>

Most of the example configuration objects use generic global templates by
default:

```
template Host "generic-host" {
  max_check_attempts = 5
  check_interval = 1m
  retry_interval = 30s

  check_command = "hostalive"
}

template Service "generic-service" {
  max_check_attempts = 3
  check_interval = 1m
  retry_interval = 30s
}
```

The `hostalive` check command is part of the
[Plugin Check Commands](10-icinga-template-library.md#icinga-template-library).

```
template Notification "mail-host-notification" {
  command = "mail-host-notification"

  states = [ Up, Down ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]

  period = "24x7"
}

template Notification "mail-service-notification" {
  command = "mail-service-notification"

  states = [ OK, Warning, Critical, Unknown ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]

  period = "24x7"
}
```

More details on `Notification` object attributes can be found [here](09-object-types.md#objecttype-notification).
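You can layer your own templates on top of the generic ones. As a sketch, a hypothetical `critical-service` template could tighten the intervals while inheriting everything else:

```
template Service "critical-service" {
  import "generic-service"

  /* Check more often than the generic 1m/30s defaults. */
  check_interval = 30s
  retry_interval = 10s
}
```

A service object or apply rule would then `import "critical-service"` instead of `generic-service`.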

#### downtimes.conf <a id="downtimes-conf"></a>

The `load` service apply rule defined in [services.conf](04-configuration.md#services-conf) defines
the `backup_downtime` custom variable.

The ScheduledDowntime apply rule uses this attribute to define the default value
for the time ranges required for recurring downtime slots.

Learn more about downtimes in [this chapter](08-advanced-topics.md#downtimes).

```
apply ScheduledDowntime "backup-downtime" to Service {
  author = "icingaadmin"
  comment = "Scheduled downtime for backup"

  ranges = {
    monday = service.vars.backup_downtime
    tuesday = service.vars.backup_downtime
    wednesday = service.vars.backup_downtime
    thursday = service.vars.backup_downtime
    friday = service.vars.backup_downtime
    saturday = service.vars.backup_downtime
    sunday = service.vars.backup_downtime
  }

  assign where service.vars.backup_downtime != ""
}
```

#### timeperiods.conf <a id="timeperiods-conf"></a>

This file contains the default time period definitions for `24x7`, `9to5`
and `never`. TimePeriod objects are referenced by the `*period`
attributes of objects such as hosts, services or notifications.

#### api-users.conf <a id="api-users-conf"></a>

Provides the default [ApiUser](09-object-types.md#objecttype-apiuser) object
named "root" for [API authentication](12-icinga2-api.md#icinga2-api-authentication).

#### app.conf <a id="app-conf"></a>

Provides the default [IcingaApplication](09-object-types.md#objecttype-icingaapplication)
object named "app" for additional settings such as disabling notifications
globally, etc.
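As a sketch, disabling notifications globally through this object could look like the following (`enable_notifications` is an attribute of the [IcingaApplication](09-object-types.md#objecttype-icingaapplication) object type):

```
object IcingaApplication "app" {
  /* Suppress all notifications, e.g. during a planned maintenance window. */
  enable_notifications = false
}
```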