bp centralized-policy-delivery
OpenStack uses a Role-Based Access Control (RBAC) mechanism to manage authorization, which determines whether a user may perform actions on resources based on the roles assigned to them. Resources include VMs, volumes, networks, etc., and are organized into projects, which are owned by domains. Users have roles assigned on domains or projects.
Users get domain- or project-scoped tokens, which contain the roles the user has assigned on that scope, and pass the token along with requests to services to perform actions on resources. The services check the roles and scope from the token against the rules defined for the requested action in the policy.json file to determine whether the user's token carries enough privileges to execute it.
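As a simplified illustration of this check (the real enforcement is done by oslo.policy, and the action names and rules below are invented for the example, not actual OpenStack defaults), a service matches the roles and scope from the token against the rule registered for the requested action:

```python
# Simplified sketch of the policy check a service performs against
# policy.json. Illustrative only; real enforcement uses oslo.policy.
import json

POLICY_JSON = """
{
    "compute:start": "role:member",
    "compute:delete": "role:admin or project_id:%(project_id)s"
}
"""

def is_authorized(action, token, target):
    """Check the token's roles and scope against the rule for *action*."""
    rules = json.loads(POLICY_JSON)
    rule = rules.get(action, "!")  # unknown actions are denied
    if rule == "!":
        return False
    # Evaluate each "or"-separated clause; only simple checks are
    # supported in this sketch.
    for clause in rule.split(" or "):
        kind, _, value = clause.partition(":")
        if kind == "role" and value in token["roles"]:
            return True
        if kind == "project_id" and value % target == token["project_id"]:
            return True
    return False

token = {"roles": ["member"], "project_id": "p1"}
print(is_authorized("compute:start", token, {}))                     # True
print(is_authorized("compute:delete", token, {"project_id": "p1"}))  # True
```

The second check passes because the token's project scope matches the target resource's project, even though the token lacks the admin role.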
In order to manage access control to services in an OpenStack cloud, operators need to use an out-of-band mechanism to update and distribute the policy.json files to the appropriate service endpoints.
Dynamic Policies aim to improve access control in OpenStack by providing an API-based mechanism for defining and delivering policies to service endpoints.
Policy rules will be managed centrally in the Identity server. These Centralized Policies can be associated with service endpoints using the endpoint policy API.
What this spec proposes is that, once the Centralized Policies are defined and associated, they will be fetched and cached at the associated service endpoints by Keystone Middleware.
Adjusting access control rules to better match the organization’s policy is a task done by deployers without the support of any OpenStack API.
Today, deployers update their policy files locally and use Configuration Management System (CMS) tools to distribute them to the appropriate service endpoints.
This approach presents a limitation that could be mitigated if policy definition and distribution were done via API: keeping the CMS tools in sync becomes a laborious task when the topology changes frequently, for example when there is a variable number of service nodes behind proxies.
The Identity server already allows operators to define policy rules and manage them via the Policy API.
Those Centralized Policies can be associated with service endpoints through the Endpoint Policy API, which allows policy entities to be associated directly with service endpoints, services, or regions.
The proposed change is the distribution of those policies to service endpoints transparently, by adding to Keystone Middleware the capability to fetch and cache them for the endpoint it is serving.
This mechanism will be controlled by the Identity server, as described in the spec Centralized Policies Distribution Mechanism.
The policy fetch will be based on the endpoint_id config option. This would ease the task of keeping an external CMS tool in sync with the topology of the endpoints in the cloud, since multiple processes behind a proxy would have the same config.
Once the endpoint_id config option is known, the Middleware requests from the Identity server the Centralized Policy associated with that endpoint.
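A minimal sketch of how the Middleware could derive the fetch URL from the endpoint_id option; the OS-ENDPOINT-POLICY route shown follows the Identity v3 endpoint policy extension, but treat the exact path as an assumption of this sketch:

```python
# Sketch: build the URL for fetching the Centralized Policy associated
# with this endpoint. The route is assumed from the Identity v3
# OS-ENDPOINT-POLICY extension, not confirmed by this spec.
def policy_url(identity_url, endpoint_id):
    return "%s/v3/endpoints/%s/OS-ENDPOINT-POLICY/policy" % (
        identity_url.rstrip("/"), endpoint_id)

print(policy_url("http://keystone:5000", "abc123"))
# http://keystone:5000/v3/endpoints/abc123/OS-ENDPOINT-POLICY/policy
```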
After fetching the policy rules, they will be cached according to the appropriate HTTP header values from server responses.
Thus, take the following response as example:
HTTP/1.1 200 OK
Cache-Control: max-age=300, must-revalidate, private
Last-Modified: Tue, 30 Jun 2015 13:00:00 GMT
Content-Length: 1040
Content-Type: application/json
{ ... }
This defines how long the retrieved policy remains fresh: the freshness lifetime is taken from the Cache-Control header of the server response. Once it expires, the Identity server will be asked again for an update.
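The freshness logic above can be sketched as follows; the helper names are illustrative, not the actual keystonemiddleware implementation:

```python
# Sketch of the caching behavior: the cached policy is reused until the
# max-age taken from the server's Cache-Control header elapses, after
# which the Identity server must be asked again.
import re
import time

def max_age(cache_control):
    """Extract the max-age value (seconds) from a Cache-Control header."""
    match = re.search(r"max-age=(\d+)", cache_control or "")
    return int(match.group(1)) if match else 0

class PolicyCache:
    def __init__(self):
        self._policy = None
        self._expires_at = 0.0

    def store(self, policy, cache_control, now=None):
        now = time.time() if now is None else now
        self._policy = policy
        self._expires_at = now + max_age(cache_control)

    def get(self, now=None):
        """Return the cached policy, or None if it must be re-fetched."""
        now = time.time() if now is None else now
        return self._policy if now < self._expires_at else None

cache = PolicyCache()
cache.store({"compute:start": "role:member"},
            "max-age=300, must-revalidate, private", now=0)
print(cache.get(now=100))   # still fresh: returns the cached rules
print(cache.get(now=400))   # expired: None, ask the Identity server again
```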
This would keep the out-of-band mechanisms used today to update and distribute the policy files to endpoints.
This alternative is opposed to centralized policy management: the new API would have to be reflected on every service endpoint. This proposal has several drawbacks, such as:
This change touches policy rules, which are sensitive data since they define access control to service APIs in OpenStack.
A potential attack vector is that a user who gains access to the policies API in the Identity server could change the Centralized Policy definitions and their associations to service endpoints, thereby changing the access control of any service endpoint in the cloud that uses this feature.
Documentation describing this security risk will be provided to warn deployers that access to the policies API must be tightly restricted.
None
None
Performance will be impacted when the cache has expired and Keystone Middleware needs to ask the Identity server for an update. If the Identity server has one, the Middleware will need to ask oslo.policy to update the Centralized Policy, which requires I/O operations.
At enforcement time, oslo.policy will also need to load the Centralized Policy file to consider its custom rules when doing enforcement. Performance may be slightly impacted at this point as well.
Benchmarking tests will be performed in a topology where there are multiple processes running behind an HAProxy. The results will be posted in the Keystone performance wiki page.
A config switch called enable_centralized_policy will allow deployers to easily enable and disable the fetching and caching of Centralized Policies. It defaults to false, meaning that the old policy mechanism is used by default, since no policy will be fetched from the server.
In addition, deployers may need to define the endpoint_id config for each service endpoint, as they have their own middleware filter defined in their WSGI pipeline.
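For illustration, the resulting middleware configuration could look like the fragment below; the option names come from this spec, while the section placement and the endpoint ID value are placeholders:

```ini
# Hypothetical configuration fragment; section name and ID are examples.
[keystone_authtoken]
enable_centralized_policy = true
endpoint_id = 4730444b6ee7492bb46ba7563dc35ce4
```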
None
Primary assignee:
Other contributors:
A list of related specs defining the dynamic delivery of policies can be found under the topic dynamic-policies-delivery.
Documentation will be provided with the Keystone Middleware config options.