======================================
Ceph Object Gateway Config Reference
======================================

The following settings may be added to the Ceph configuration file (usually
``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. Each
setting has a default value; if a setting is not specified in the Ceph
configuration file, its default value is used automatically.

Configuration variables set under the ``[client.radosgw.{instance-name}]``
section will not apply to ``rgw`` or ``radosgw-admin`` commands without an
``instance-name`` specified in the command. Thus, variables meant to apply to
all RGW instances or to all ``radosgw-admin`` commands can be put into the
``[global]`` or the ``[client]`` section to avoid specifying an
``instance-name``.

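For example, a minimal ``ceph.conf`` excerpt for a single gateway instance
might look like the following. The instance name ``gateway1``, the port, and
the hostname shown are illustrative placeholders, not recommendations::

   [client.radosgw.gateway1]
   rgw_frontends = beast port=8080
   rgw_dns_name = objects.example.com
   rgw_thread_pool_size = 512
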
.. confval:: rgw_frontends
.. confval:: rgw_data
.. confval:: rgw_enable_apis
.. confval:: rgw_cache_enabled
.. confval:: rgw_cache_lru_size
.. confval:: rgw_dns_name
.. confval:: rgw_script_uri
.. confval:: rgw_request_uri
.. confval:: rgw_print_continue
.. confval:: rgw_remote_addr_param
.. confval:: rgw_op_thread_timeout
.. confval:: rgw_op_thread_suicide_timeout
.. confval:: rgw_thread_pool_size
.. confval:: rgw_num_control_oids
.. confval:: rgw_init_timeout
.. confval:: rgw_mime_types_file
.. confval:: rgw_s3_success_create_obj_status
.. confval:: rgw_resolve_cname
.. confval:: rgw_obj_stripe_size
.. confval:: rgw_extended_http_attrs
.. confval:: rgw_exit_timeout_secs
.. confval:: rgw_get_obj_window_size
.. confval:: rgw_get_obj_max_req_size
.. confval:: rgw_multipart_min_part_size
.. confval:: rgw_relaxed_s3_bucket_names
.. confval:: rgw_list_buckets_max_chunk
.. confval:: rgw_override_bucket_index_max_shards
.. confval:: rgw_curl_wait_timeout_ms
.. confval:: rgw_copy_obj_progress
.. confval:: rgw_copy_obj_progress_every_bytes
.. confval:: rgw_admin_entry
.. confval:: rgw_content_length_compat
.. confval:: rgw_bucket_quota_ttl
.. confval:: rgw_user_quota_bucket_sync_interval
.. confval:: rgw_user_quota_sync_interval
.. confval:: rgw_bucket_default_quota_max_objects
.. confval:: rgw_bucket_default_quota_max_size
.. confval:: rgw_user_default_quota_max_objects
.. confval:: rgw_user_default_quota_max_size
.. confval:: rgw_verify_ssl
.. confval:: rgw_max_chunk_size

Lifecycle Settings
==================

Bucket Lifecycle configuration can be used to manage your objects so that they
are stored effectively throughout their lifetime. In past releases, Lifecycle
processing was rate-limited by single-threaded processing. Since the Nautilus
release, the Ceph Object Gateway allows for parallel thread processing of
bucket lifecycles across additional Ceph Object Gateway instances, and
replaces the in-order index shard enumeration with a randomly ordered
sequence.

There are two options in particular worth examining when you want to increase
the aggressiveness of lifecycle processing:

.. confval:: rgw_lc_max_worker
.. confval:: rgw_lc_max_wp_worker

Both of these values can be tuned to suit your specific workload and further
increase the aggressiveness of lifecycle processing. For a workload with a
large number of buckets (thousands), you would look at increasing the
:confval:`rgw_lc_max_worker` value from its default of 3, whereas for a
workload with a smaller number of buckets but a higher number of objects
(hundreds of thousands) per bucket, you would consider increasing
:confval:`rgw_lc_max_wp_worker` from its default of 3.

.. note:: Before tuning either of these values, validate the current cluster
   performance and Ceph Object Gateway utilization.

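For example, a hypothetical expiration-heavy deployment might raise both
values under its gateway section of ``ceph.conf``. The instance name and the
numbers below are illustrative assumptions, not recommendations::

   [client.radosgw.gateway1]
   rgw_lc_max_worker = 5
   rgw_lc_max_wp_worker = 5
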
Garbage Collection Settings
===========================

The Ceph Object Gateway allocates storage for new objects immediately.

The Ceph Object Gateway purges the storage space used for deleted and overwritten
objects in the Ceph Storage cluster some time after the gateway deletes the
objects from the bucket index. The process of purging the deleted object data
from the Ceph Storage cluster is known as Garbage Collection or GC.

To view the queue of objects awaiting garbage collection, execute the
following command:

.. prompt:: bash $

   radosgw-admin gc list

.. note:: Specify ``--include-all`` to list all entries, including unexpired
   ones.

Garbage collection is a background activity that may execute continuously or
during times of low load, depending upon how the administrator configures the
Ceph Object Gateway. By default, the Ceph Object Gateway conducts GC
operations continuously. Because GC operations are a normal part of Ceph
Object Gateway operations, especially with object delete operations, objects
eligible for garbage collection exist most of the time.

Some workloads may temporarily or permanently outpace the rate of garbage
collection activity. This is especially true of delete-heavy workloads, where
many objects are stored for a short period of time and then deleted. For these
types of workloads, administrators can increase the priority of garbage
collection operations relative to other operations with the following
configuration parameters:

.. confval:: rgw_gc_max_objs
.. confval:: rgw_gc_obj_min_wait
.. confval:: rgw_gc_processor_max_time
.. confval:: rgw_gc_processor_period
.. confval:: rgw_gc_max_concurrent_io

:Tuning Garbage Collection for Delete Heavy Workloads:

As an initial step toward tuning Ceph Garbage Collection to be more
aggressive, we suggest increasing the following options from their default
configuration values::

   rgw_gc_max_concurrent_io = 20
   rgw_gc_max_trim_chunk = 64

.. note:: Modifying these values requires a restart of the RGW service.

After increasing these values from their defaults, monitor cluster performance
during Garbage Collection to verify that the increased values do not cause any
adverse performance issues.

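For instance, one way to apply these settings is to add them to the gateway's
section in ``ceph.conf`` (the instance name below is an illustrative
assumption)::

   [client.radosgw.gateway1]
   rgw_gc_max_concurrent_io = 20
   rgw_gc_max_trim_chunk = 64

Then restart the Ceph Object Gateway, as noted above, so that the new values
take effect.
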
Multisite Settings
==================

.. versionadded:: Jewel

You may include the following settings in your Ceph configuration
file under each ``[client.radosgw.{instance-name}]`` section.

.. confval:: rgw_zone
.. confval:: rgw_zonegroup
.. confval:: rgw_realm
.. confval:: rgw_run_sync_thread
.. confval:: rgw_data_log_window
.. confval:: rgw_data_log_changes_size
.. confval:: rgw_data_log_obj_prefix
.. confval:: rgw_data_log_num_shards
.. confval:: rgw_md_log_max_shards

.. important:: The values of :confval:`rgw_data_log_num_shards` and
   :confval:`rgw_md_log_max_shards` should not be changed after sync has
   started.

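For example, a gateway participating in a hypothetical multisite deployment
might carry entries such as the following in its section of ``ceph.conf``. The
realm, zonegroup, and zone names shown are illustrative assumptions::

   [client.radosgw.gateway1]
   rgw_realm = gold
   rgw_zonegroup = us
   rgw_zone = us-east-1
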
S3 Settings
===========

.. confval:: rgw_s3_auth_use_ldap

Swift Settings
==============

.. confval:: rgw_enforce_swift_acls
.. confval:: rgw_swift_tenant_name
.. confval:: rgw_swift_token_expiration
.. confval:: rgw_swift_url
.. confval:: rgw_swift_url_prefix
.. confval:: rgw_swift_auth_url
.. confval:: rgw_swift_auth_entry
.. confval:: rgw_swift_account_in_url
.. confval:: rgw_swift_versioning_enabled
.. confval:: rgw_trust_forwarded_https

Logging Settings
================

.. confval:: rgw_log_nonexistent_bucket
.. confval:: rgw_log_object_name
.. confval:: rgw_log_object_name_utc
.. confval:: rgw_usage_max_shards
.. confval:: rgw_usage_max_user_shards
.. confval:: rgw_enable_ops_log
.. confval:: rgw_enable_usage_log
.. confval:: rgw_ops_log_rados
.. confval:: rgw_ops_log_socket_path
.. confval:: rgw_ops_log_data_backlog
.. confval:: rgw_usage_log_flush_threshold
.. confval:: rgw_usage_log_tick_interval
.. confval:: rgw_log_http_headers

Keystone Settings
=================

.. confval:: rgw_keystone_url
.. confval:: rgw_keystone_api_version
.. confval:: rgw_keystone_admin_domain
.. confval:: rgw_keystone_admin_project
.. confval:: rgw_keystone_admin_token
.. confval:: rgw_keystone_admin_token_path
.. confval:: rgw_keystone_admin_tenant
.. confval:: rgw_keystone_admin_user
.. confval:: rgw_keystone_admin_password
.. confval:: rgw_keystone_admin_password_path
.. confval:: rgw_keystone_accepted_roles
.. confval:: rgw_keystone_token_cache_size
.. confval:: rgw_keystone_verify_ssl

Server-side Encryption Settings
===============================

.. confval:: rgw_crypt_s3_kms_backend

Barbican Settings
=================

.. confval:: rgw_barbican_url
.. confval:: rgw_keystone_barbican_user
.. confval:: rgw_keystone_barbican_password
.. confval:: rgw_keystone_barbican_tenant
.. confval:: rgw_keystone_barbican_project
.. confval:: rgw_keystone_barbican_domain

HashiCorp Vault Settings
========================

.. confval:: rgw_crypt_vault_auth
.. confval:: rgw_crypt_vault_token_file
.. confval:: rgw_crypt_vault_addr
.. confval:: rgw_crypt_vault_prefix
.. confval:: rgw_crypt_vault_secret_engine
.. confval:: rgw_crypt_vault_namespace

SSE-S3 Settings
===============

.. confval:: rgw_crypt_sse_s3_backend
.. confval:: rgw_crypt_sse_s3_vault_secret_engine
.. confval:: rgw_crypt_sse_s3_key_template
.. confval:: rgw_crypt_sse_s3_vault_auth
.. confval:: rgw_crypt_sse_s3_vault_token_file
.. confval:: rgw_crypt_sse_s3_vault_addr
.. confval:: rgw_crypt_sse_s3_vault_prefix
.. confval:: rgw_crypt_sse_s3_vault_namespace
.. confval:: rgw_crypt_sse_s3_vault_verify_ssl
.. confval:: rgw_crypt_sse_s3_vault_ssl_cacert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientcert
.. confval:: rgw_crypt_sse_s3_vault_ssl_clientkey

QoS Settings
============

.. versionadded:: Nautilus

The ``civetweb`` frontend has a threading model that uses a thread per
connection, and hence is automatically throttled by the
:confval:`rgw_thread_pool_size` configurable when it comes to accepting
connections. The newer ``beast`` frontend is not restricted by the thread pool
size when it comes to accepting new connections, so a scheduler abstraction
was introduced in the Nautilus release to support future methods of scheduling
requests.

Currently the scheduler defaults to a throttler that limits the number of
active connections to a configured value. QoS based on mClock is currently in
an *experimental* phase and is not yet recommended for production. The current
implementation of the *dmclock_client* op queue divides RGW ops into admin,
auth (Swift auth, STS), metadata, and data requests.

.. confval:: rgw_max_concurrent_requests
.. confval:: rgw_scheduler_type
.. confval:: rgw_dmclock_auth_res
.. confval:: rgw_dmclock_auth_wgt
.. confval:: rgw_dmclock_auth_lim
.. confval:: rgw_dmclock_admin_res
.. confval:: rgw_dmclock_admin_wgt
.. confval:: rgw_dmclock_admin_lim
.. confval:: rgw_dmclock_data_res
.. confval:: rgw_dmclock_data_wgt
.. confval:: rgw_dmclock_data_lim
.. confval:: rgw_dmclock_metadata_res
.. confval:: rgw_dmclock_metadata_wgt
.. confval:: rgw_dmclock_metadata_lim

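For example, to stay with the default throttler scheduler and cap the number
of concurrent requests for a hypothetical gateway, entries such as the
following could be used. The instance name and the limit shown are
illustrative assumptions::

   [client.radosgw.gateway1]
   rgw_scheduler_type = throttler
   rgw_max_concurrent_requests = 1024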