[ { "desc": "Timeout in milliseconds for the parallel RPCs made in DistributedFileSystem#getFileBlockStorageLocations(). This value is only emitted for Impala.", "display_name": "HDFS File Block Storage Location Timeout", "name": "dfs_client_file_block_storage_locations_timeout", "value": "10000" }, { "desc": "The domain to use for the HTTP cookie that stores the authentication token. In order for authentiation to work correctly across all Hadoop nodes' web-consoles the domain must be correctly set. Important: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly all nodes in the cluster must be configured to generate URLs with hostname.domain names on it.", "display_name": "Hadoop HTTP Authentication Cookie Domain", "name": "hadoop_http_auth_cookie_domain", "value": "" }, { "desc": "The user that this service's processes should run as (except the HttpFS server, which has its own user)", "display_name": "System User", "name": "process_username", "value": "hdfs" }, { "desc": "

Event filters are defined in a JSON object like the following:\n\n{\n  \"defaultAction\" : (\"accept\", \"discard\"),\n  \"rules\" : [\n    {\n      \"action\" : (\"accept\", \"discard\"),\n      \"fields\" : [\n        {\n          \"name\" : \"fieldName\",\n          \"match\" : \"regex\"\n        }\n      ]\n    }\n  ]\n}\n\nA filter has a default action and a list of rules, in order of precedence. Each rule defines an action, and a list of fields to match against the audit event.\n\nA rule is \"accepted\" if all the listed field entries match the audit event. At that point, the action declared by the rule is taken.\n\nIf no rules match the event, the default action is taken. Actions default to \"accept\" if not defined in the JSON object.\n\nThe following is the list of fields that can be filtered for HDFS events:
\n\n\n", "display_name": "Event Filter", "name": "navigator_audit_event_filter", "value": "{\n \"comment\" : [\n \"Default filter for HDFS services.\",\n \"Discards events generated by the internal Cloudera and/or HDFS users\",\n \"(hdfs, hbase, mapred and dr.who), and events that affect files in \",\n \"/tmp directory.\"\n ],\n \"defaultAction\" : \"accept\",\n \"rules\" : [\n {\n \"action\" : \"discard\",\n \"fields\" : [\n { \"name\" : \"username\", \"match\" : \"(?:cloudera-scm|hbase|hdfs|mapred|hive|dr.who)(?:/.+)?\" }\n ]\n },\n {\n \"action\" : \"discard\",\n \"fields\" : [\n { \"name\" : \"src\", \"match\" : \"/tmp(?:/.*)?\" }\n ]\n }\n ]\n}\n" }, { "desc": "The password for the SSL keystore.", "display_name": "Hadoop User Group Mapping LDAP SSL Keystore Password", "name": "hadoop_group_mapping_ldap_keystore_passwd", "value": "" }, { "desc": "Comma-delimited list of hosts where you want to allow the Hue user to impersonate other users. The default '*' allows all hosts. To disable entirely, use a string that doesn't correspond to a host name, such as '_no_host'.", "display_name": "Hue Proxy User Hosts", "name": "hue_proxy_user_hosts_list", "value": "*" }, { "desc": "The service monitor will use this directory to create files to test if the hdfs service is healthy. The directory and files are created with permissions specified by 'HDFS Health Canary Directory Permissions'", "display_name": "HDFS Health Canary Directory", "name": "firehose_hdfs_canary_directory", "value": "/tmp/.cloudera_health_monitoring_canary_files" }, { "desc": "Path to the directory where audit logs will be written. 
The directory will be created if it doesn't exist.", "display_name": "Audit Log Directory", "name": "audit_event_log_dir", "value": "/var/log/hadoop-hdfs/audit" }, { "desc": "Class for user to group mapping (get groups for a given user).", "display_name": "Hadoop User Group Mapping Implementation", "name": "hadoop_security_group_mapping", "value": "org.apache.hadoop.security.ShellBasedUnixGroupsMapping" }, { "desc": "Allows the oozie superuser to impersonate any members of a comma-delimited list of groups. The default '*' allows all groups. To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "Oozie Proxy User Groups", "name": "oozie_proxy_user_groups_list", "value": "*" }, { "desc": "Comma-separated list of compression codecs that can be used in job or map compression.", "display_name": "Compression Codecs", "name": "io_compression_codecs", "value": "org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec" }, { "desc": "Comma-separated list of users authorized to use Hadoop. This is emitted only if authorization is enabled.", "display_name": "Authorized Users", "name": "hadoop_authorized_users", "value": "*" }, { "desc": "Enable HDFS short circuit read. This allows a client co-located with the DataNode to read HDFS file blocks directly. This gives a performance boost to distributed clients that are aware of locality.", "display_name": "Enable HDFS Short Circuit Read", "name": "dfs_datanode_read_shortcircuit", "value": "true" }, { "desc": "The distinguished name of the user to bind as when connecting to the LDAP server. 
This may be left blank if the LDAP server supports anonymous binds.", "display_name": "Hadoop User Group Mapping LDAP Bind User", "name": "hadoop_group_mapping_ldap_bind_user", "value": "" }, { "desc": "When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold.", "display_name": "Enable Service Level Health Alerts", "name": "enable_alerts", "value": "true" }, { "desc": "The password of the bind user.", "display_name": "Hadoop User Group Mapping LDAP Bind User Password", "name": "hadoop_group_mapping_ldap_bind_passwd", "value": "" }, { "desc": "Action to take when the audit event queue is full. Drop the event or shut down the affected process.", "display_name": "Queue Policy", "name": "navigator_audit_queue_policy", "value": "DROP" }, { "desc": "When set, each role will identify important log events and forward them to Cloudera Manager.", "display_name": "Enable Log Event Capture", "name": "catch_events", "value": "true" }, { "desc": "For advanced use only, a string to be inserted into core-site.xml. Applies to all roles and client configurations in this HDFS service as well as all its dependent services. Any configs added here override their default values in HDFS (which can be found in hdfs-default.xml).", "display_name": "Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml", "name": "core_site_safety_valve", "value": null }, { "desc": "The default block size in bytes for new HDFS files. 
Note that this value is also used as the HBase Region Server HLog block size.", "display_name": "HDFS Block Size", "name": "dfs_block_size", "value": "134217728" }, { "desc": "Enable WebHDFS interface", "display_name": "Enable WebHDFS", "name": "dfs_webhdfs_enabled", "value": "true" }, { "desc": "The name of the group of superusers.", "display_name": "Superuser Group", "name": "dfs_permissions_supergroup", "value": "supergroup" }, { "desc": "Typically, HDFS clients and servers communicate by opening sockets via an IP address. In certain networking configurations, it is preferable to open sockets after doing a DNS lookup on the hostname. Enable this property to open sockets after doing a DNS lookup on the hostname. This property is supported in CDH3u4 or later deployments.", "display_name": "Use DataNode Hostname", "name": "dfs_client_use_datanode_hostname", "value": "false" }, { "desc": "Enter a FailoverProxyProvider implementation to configure two URIs to connect to during fail-over. The first configured address is tried first, and on a fail-over event the other address is tried.", "display_name": "FailoverProxyProvider Class", "name": "dfs_ha_proxy_provider", "value": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" }, { "desc": "The search base for the LDAP connection. This is a distinguished name, and will typically be the root of the LDAP directory.", "display_name": "Hadoop User Group Mapping Search Base", "name": "hadoop_group_mapping_ldap_base", "value": "" }, { "desc": "If false, permission checking is turned off for files in HDFS.", "display_name": "Check HDFS Permissions", "name": "dfs_permissions", "value": "true" }, { "desc": "Comma-delimited list of groups that you want to allow the Hue user to impersonate. The default '*' allows all groups. 
To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "Hue Proxy User Groups", "name": "hue_proxy_user_groups_list", "value": "*" }, { "desc": "Comma-separated list of groups authorized to use Hadoop. This is emitted only if authorization is enabled.", "display_name": "Authorized Groups", "name": "hadoop_authorized_groups", "value": "" }, { "desc": "Comma-delimited list of hosts where you want to allow the oozie user to impersonate other users. The default '*' allows all hosts. To disable entirely, use a string that doesn't correspond to a host name, such as '_no_host'.", "display_name": "Oozie Proxy User Hosts", "name": "oozie_proxy_user_hosts_list", "value": "*" }, { "desc": "When set, Cloudera Manager will send alerts when this entity's configuration changes.", "display_name": "Enable Configuration Change Alerts", "name": "enable_config_alerts", "value": "false" }, { "desc": "Comma-delimited list of groups that you want to allow the mapred user to impersonate. The default '*' allows all groups. To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "Mapred Proxy User Groups", "name": "mapred_proxy_user_groups_list", "value": "*" }, { "desc": "For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration.", "display_name": "HDFS Service Environment Advanced Configuration Snippet (Safety Valve)", "name": "hdfs_service_env_safety_valve", "value": null }, { "desc": "Additional mapping rules that will be inserted before rules generated from the list of trusted realms and before the default rule. After changing this value and restarting the service, any services depending on this one must be restarted as well. 
The hadoop.security.auth_to_local property is configured using this information.", "display_name": "Additional Rules to Map Kerberos Principals to Short Names", "name": "extra_auth_to_local_rules", "value": null }, { "desc": "Maximum size of audit log file in MB before it is rolled over.", "display_name": "Maximum Audit Log File Size", "name": "navigator_audit_log_max_file_size", "value": "100" }, { "desc": "Enables authentication for hadoop HTTP web-consoles for all roles of this service. Note: This is effective only if security is enabled for the HDFS service.", "display_name": "Enable Authentication for HTTP Web-Consoles", "name": "hadoop_secure_web_ui", "value": "false" }, { "desc": "Quality of protection for secured RPC connections between NameNode and HDFS clients. For effective RPC protection, enable Kerberos authentication.", "display_name": "Hadoop RPC Protection", "name": "hadoop_rpc_protection", "value": "authentication" }, { "desc": "Comma-delimited list of groups that you want to allow the Hive user to impersonate. The default '*' allows all groups. To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "Hive Proxy User Groups", "name": "hive_proxy_user_groups_list", "value": "*" }, { "desc": "Comma-separated list of users authorized to perform admin operations on Hadoop. This is emitted only if authorization is enabled.", "display_name": "Authorized Admin Users", "name": "hadoop_authorized_admin_users", "value": "*" }, { "desc": "The health check thresholds of the number of missing blocks. Specified as a percentage of the total number of blocks.", "display_name": "Missing Block Monitoring Thresholds", "name": "hdfs_missing_blocks_thresholds", "value": "{\"critical\":\"any\",\"warning\":\"never\"}" }, { "desc": "The amount of time after NameNode(s) start that the lack of an active NameNode will be tolerated. 
This is intended to allow either the auto-failover daemon to make a NameNode active, or a specifically issued failover command to take effect. This is an advanced option that does not often need to be changed.", "display_name": "NameNode Activation Startup Tolerance", "name": "hdfs_namenode_activation_startup_tolerance", "value": "180" }, { "desc": "Comma-delimited list of groups that you want to allow the HttpFS user to impersonate. The default '*' allows all groups. To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "HttpFS Proxy User Groups", "name": "httpfs_proxy_user_groups_list", "value": "*" }, { "desc": "Allows the flume user to impersonate any members of a comma-delimited list of groups. The default '*' allows all groups. To disable entirely, use a string that doesn't correspond to a group name, such as '_no_group_'.", "display_name": "Flume Proxy User Groups", "name": "flume_proxy_user_groups_list", "value": "*" }, { "desc": "Comma-delimited list of hosts where you want to allow the mapred user to impersonate other users. The default '*' allows all hosts. To disable entirely, use a string that doesn't correspond to a host name, such as '_no_host'.", "display_name": "Mapred Proxy User Hosts", "name": "mapred_proxy_user_hosts_list", "value": "*" }, { "desc": "For advanced use only, a list of configuration properties that will be used by the Service Monitor instead of the current client configuration for the service.", "display_name": "Service Monitor Client Config Overrides", "name": "smon_client_config_overrides", "value": "<property><name>dfs.socket.timeout</name><value>3000</value></property><property><name>dfs.datanode.socket.write.timeout</name><value>3000</value></property><property><name>ipc.client.connect.max.retries</name><value>1</value></property><property><name>fs.permissions.umask-mode</name><value>000</value></property>" }, { "desc": "

The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed.\n\nEach trigger has all of the following fields: