Open Access

Access control as a service for the Cloud

  • Nikos Fotiou,
  • Apostolis Machas,
  • George C. Polyzos and
  • George Xylomenos

Journal of Internet Services and Applications 2015, 6:11

DOI: 10.1186/s13174-015-0026-4

Received: 29 October 2014

Accepted: 8 April 2015

Published: 1 June 2015


Cloud computing has become the focus of attention in the computing industry. However, security concerns still impede the widespread adoption of this technology. Most enterprises are particularly worried about the lack of control over their outsourced data, since the authentication and authorization systems of Cloud providers are generic and they cannot be easily adapted to the requirements of each individual enterprise. An adaptation process requires the creation of complex protocols, often leading to security problems and “lock-in” conditions. In this paper we present the design of a lightweight access control solution that overcomes these problems. With our solution access control is offered as a service by a third trusted party, the Access Control Provider. Access control as a service enhances end-user privacy, eliminates the need for developing complex adaptation protocols, and offers data owners flexibility to switch among Cloud providers, or to use multiple, different Cloud providers concurrently. As a proof of concept, we have implemented and incorporated our solution in the popular open-source Cloud stack OpenStack. Moreover, we have designed and implemented a Web application that enables the incorporation of our solution into Google Drive.


Keywords: Authorization, Authentication, Delegation, Security Policies

1 Introduction

Cloud computing is a technology that offers a cost-effective way for outsourcing data storage and computation. Nevertheless, despite its intriguing properties, enterprises are reluctant to fully adopt it, since they are concerned–among other things–about losing the governance of their outsourced assets, i.e., losing the ability to enforce their own, enterprise-specific, security policies. According to PwC’s Global State of Information Security Survey 2012 [1], the largest perceived Cloud security risk is the “uncertain ability to enforce provider security policies,” whereas according to the survey of Subashini and Kavitha [2] one of the biggest security challenges for providing Cloud-based services is the “adherence of the Cloud provider to the security policies of its clients,” as well as “the administration of user authorization systems”. This mismatch between provider-enterprise security policies severely impedes Cloud adoption and further research on effective solutions for this problem is required. Indeed, “effective models for managing and enforcing data access policies, regardless of whether the data is stored in the Cloud or cached locally on client devices” was identified back in 2010 as a top research priority, by the European Network and Information Security Agency (ENISA) [3].

One question that may arise is how likely loss of governance of the outsourced data is, and what its impact is. According to ENISA’s Cloud Computing Security Risk Assessment report [4], the loss of governance is a risk with very high probability and very high impact. The same report states that two of the vulnerabilities that may expose an enterprise to that risk are “unclear roles and responsibilities” and “poor enforcement of role definition.” This outcome comes as no surprise, since the organizational structure and the security policies of an individual enterprise cannot be easily captured by a Cloud provider. Moreover, the interoperability between an enterprise and a Cloud provider requires the development of complex communication protocols; this, however, increases the chances of a security breach due to implementation errors, according to the Cloud Security Alliance [5]. Armando et al. [6] exploited such implementation errors in order to bypass the SAML-based single sign-on system of Google Apps. Similarly, Somorovsky et al. [7] gained access to multiple SAML-based systems by exploiting implementation bugs. Nevertheless, even if the developed protocol is implemented correctly, it will be Cloud provider specific, thus hindering the migration of an enterprise to another Cloud provider; this condition is known as lock-in, and has been identified as a high probability risk by ENISA [4].

In this paper, we propose a novel solution that enables a trusted entity to store enterprise-specific security policies and take access control decisions on behalf of a Cloud provider: the Cloud provider then has only to respect the access control decision. This trusted entity, which is referred to as the Access Control Provider (ACP), may as well be provided by the enterprise itself, for example, by leveraging its user management system, or by a third party. Compared to existing systems, our solution offers better end-user privacy and requires a much simpler communication protocol.

This paper extends our previous work presented in [8], with a more detailed system description, an additional proof of concept implementation, more extensive overhead evaluation, and further comparison with existing systems. The paper is organized as follows. In Section 2 we discuss related work in this area. In Section 3 we detail our scheme. In Section 4 we present our prototype that implements a secure private Cloud file storage service using OpenStack, an open source Cloud stack, as well as a Web application that enables the incorporation of our solution in Google Drive. In Section 5 we evaluate the security properties of our solution and analyze its performance. Finally, in Section 6 we discuss further extensions to our solution and we conclude in Section 7.

2 Related work

Many legacy systems rely on Role Based Access Control (RBAC) for controlling access to resources stored by third parties (e.g., Cloud providers, web servers). These systems (e.g., [9-12]) usually adopt one of the following approaches for enforcing access control policies: (a) they either employ an existing language (such as XACML [13]) or define their own to specify the access control policy, which is then interpreted and enforced by the Cloud, or (b) they use cryptographic solutions (such as attribute based encryption [14]) to encrypt data in such a way that only authorized users can decrypt them. RBAC is orthogonal to our system: RBAC policy definition languages and roles can be used by the ACPs, whereas data stored in the Cloud can be encrypted based on roles. Our system is concerned with access control delegation rather than access control enforcement, for which an RBAC solution may be used.

Single Sign-On (SSO) systems–such as Kerberos and, more recently, OpenID 2.0 [15] and OAuth 2.0 [16]–have similar goals to our scheme. Kerberos has been widely used for controlling access to network resources. In a Kerberos system a Ticket Granting Service (TGS) provides a “ticket” to an authenticated user that enables her to use a resource. The TGS and the resource, however, must belong to the same administrative domain, or they must be pre-configured with a shared secret. Our system requires neither common administrative domains nor pre-shared secrets.

OpenID is an identity management system that allows identity management delegation to a third trusted party, known as the Identity Provider (IdP). IdPs authenticate users and provide them with an “authentication token”, which they can use to access a resource. OpenID has been studied in the context of Cloud computing. Nunez et al. [17] used OpenID in conjunction with proxy re-encryption in order to provide Cloud based identity management services. Similarly, Khan et al. [18] have implemented OpenID based authentication mechanisms for the OpenStack platform. OpenID provides only user authentication, therefore, in an OpenID-based access control system, the Cloud provider is responsible for evaluating the access control policies. Moreover, the authentication token is unique per user, therefore user activity can be tracked. In our system access control policies are evaluated by ACPs and not by the Cloud providers. In addition, in our system tokens are ephemeral, therefore they cannot be used to track the long term activity of a specific user.

OAuth 2.0 is an IETF standard for authorizing access to resources over HTTP. OAuth 2.0 requires the resource owner to be online during the user authorization procedure (Section 1.2 of [16]), and implicitly requires the development of a communication protocol between the resource server and the authorization server in order to exchange an access token whose form–as mentioned in Section 1.4 of [16]–is not specified. This vagueness impedes implementations of systems where the resource server and the authorization server belong to different administrative domains. One approach for implementing access control using OAuth 2.0 is the following: an access control policy based on attributes that can be provided by an authorization server (e.g., user age, as provided by a social network) is defined and stored in the Cloud; the Cloud provider accesses the required attributes using OAuth 2.0 and uses them to evaluate the access control policy. In this scenario, the Cloud provider not only learns some information about the user (in this example, his age), but is also able to interpret it. In our system, Cloud providers neither learn anything about users nor have to understand any enterprise-specific semantics.

Policy Based Admission Control [19] is a framework that allows a Policy Enforcement Point (PEP) to delegate access control policy decisions to a Policy Decision Point (PDP). Each Cloud provider can operate a PEP, whereas PDPs can be implemented by third trusted parties, or even the enterprises themselves. A PEP is responsible for collecting all the information required by a PDP, which includes information about the user that requests access. Moreover, a PEP and a PDP should agree on a, usually complex, communication protocol (e.g., COPS [20]). With our solution, Cloud providers are completely oblivious about access control policies. Moreover, Cloud providers neither collect nor learn any information about users. Finally, our communication protocol is much simpler, therefore less prone to implementation errors.

The Security Assertion Markup Language (SAML) is an XML-based security assertion language [21], used for exchanging authentication and authorization statements about subjects. Being a language and not a system, SAML is orthogonal to our work. As a matter of fact, messages in our scheme can be exchanged via SAML, using the Authentication Request Protocol (Section 3.4 of [21]).

3 System design

3.1 Overview

Our scheme is composed of the following entities: the data owner (owner), the data consumer (consumer), the Cloud provider (CP), and the access control provider (ACP). The goal of an owner is to store some data in a CP and allow authorized consumers to perform operations over this data. Each operation is protected by an access control policy. An access control policy is stored in an ACP and maps the identity of a consumer to a boolean output (true, false). When the output of an access control policy is true, the consumer that provided the identification data is considered authorized.

In our scheme, the following trust relationships are considered: owners trust ACPs to authorize consumers, and owners and consumers trust CPs to respect the decisions of ACPs. The first type of trust relationship can be trivially established if the ACP is implemented by the owner (e.g., the ACP leverages the enterprise’s user management system). The second type of trust relationship is a relaxed form of the trust relationship that currently exists between an owner and a Cloud provider: in a contemporary Cloud system where access control is implemented in the Cloud, an owner trusts a Cloud provider to (i) securely store some enterprise-specific security policies, (ii) use these policies correctly, i.e., understand their semantics, and (iii) enforce the outcome of the access control decision.

As illustrated in Figure 1 a typical transaction in our system takes place as follows. Initially, an owner stores an access control policy in an ACP (step 1) and obtains a URI for that policy (step 2). As a next step, she implements an operation over some data in a CP and stores the URI of the policy that protects this operation (step 3). When a consumer tries to perform a protected operation for the first time (step 4), she receives in response the URI of the access control policy that protects the operation and a unique token (step 5). Then, the consumer authenticates herself to a suitable ACP by providing some form of identification data and requests authorization for the access control policy specified in the obtained URI (step 6). If the consumer “satisfies” the access control policy, the ACP signs the token and sends it back to the consumer (step 7). The consumer repeats her request to the CP including this time the signed token (step 8). The CP checks the validity of the token and if the token is valid it executes the desired operation and returns its output (step 9).
Figure 1

Scheme overview.

3.2 Goals

Our goal is to build a system in which the following properties hold:
  • The system is secure: Provided that all system entities respect the trust relationships described previously, it shall not be possible for an unauthorized user to perform a protected operation.

  • Consumer privacy is preserved: A CP shall gain minimal information about the identity of a consumer. Ideally, it will only learn that a consumer can be authorized by a specific ACP. Moreover, an ACP should not be able to tell which operation a consumer wants to perform or which data she accesses.

  • Data can be easily migrated among different Cloud providers: The only entities that should be aware of an access control policy and its implementation details are the ACP and the owner. CPs shall be oblivious about the access control policy implementation details. Therefore, if two CPs implement our solution, moving data from one CP to another shall be almost as trivial as copy-pasting it.

  • An access control policy does not reveal anything about the data and the operations it protects: Access control policies should be decoupled from the data and the operations they protect. An access control policy should be defined taking into account solely consumer attributes.

  • Access control policies are re-usable: An access control policy should not be bound to a particular operation. It should be possible to protect many and diverse data items, stored in multiple CPs.

  • An access control policy can be easily modified: The modification of an access control policy shall not involve CPs: the only entity involved in the modification of an access control policy should be the ACP where the policy is stored.

3.3 Detailed system description

We now detail our system design (Figure 2). We have made the following assumptions: (i) each ACP and CP has a public-private key pair, (ii) the ACPs’ and CPs’ public keys are known to consumers and (iii) all messages are exchanged over a secure channel. Throughout this section the notation of Table 1 is used.
Figure 2

System procedures.

Table 1

Notation

Pub_CP         The public key of a CP
Pub_ACP        The public key of an ACP
URI_ap         The URI of an access control policy
Sign_ACP(Y)    The digital signature of plaintext Y generated using the private key of an ACP

Our system consists of the following procedures:

3.3.1 Access control policy creation and data storage

With this procedure an owner creates and stores an access control policy in an ACP. The ACP in return provides a URI_ap. For each protected operation implemented in a CP, the owner defines the URI_ap of the policy that protects it and the Pub_ACP of the ACP where the policy is stored. This information is maintained in the CP’s Access Table, which contains tuples of the form:
$$[operation, URI_{ap}, Pub_{ACP}] $$

A URI_ap is re-usable, i.e., it can be used for protecting multiple operations stored in various CPs. The mechanisms for creating an access control policy and for updating an Access Table are ACP specific and CP specific, respectively.
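As a sketch, a CP’s Access Table can be modeled as a mapping from operations to policy pointers. All operation names, URIs and key values below are hypothetical; the same URI_ap may guard several operations, possibly across different CPs.

```python
# Hypothetical Access Table: each protected operation maps to the URI of
# the policy that guards it and the public key of the ACP storing it.
access_table = {
    "update_records":       ("https://acp.example/policy/1", "ACP-public-key"),
    "calculate_statistics": ("https://acp.example/policy/2", "ACP-public-key"),
    "view_statistics":      ("https://acp.example/policy/3", "ACP-public-key"),
}

def policy_for(operation):
    """Return the (URI_ap, Pub_ACP) fields of the tuple for an operation."""
    if operation not in access_table:
        raise KeyError(f"no access control policy registered for {operation!r}")
    return access_table[operation]
```

Because the table only stores a pointer (the URI) to the policy, a CP never needs to interpret the policy itself.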

3.3.2 Unauthorized request

This procedure is executed by a consumer in order to perform an operation for the first time. The consumer sends an operation request message to the CP. Upon receiving the request, the CP creates a unique token (i.e., an adequately large random number) and sends it back to the consumer, along with the corresponding URI_ap. Therefore, the following exchange of messages takes place:
  1. (1) Consumer → CP: operation request

  2. (2) CP → Consumer: URI_ap, Token

In order to keep track of the generated tokens, a CP maintains a Token Table that contains entries of the form:
$$[Token, authenticated, expires, URI_{ap}] $$

When a new token is generated, a new entry is added to this table. The value of the authenticated field of this entry is set to false and the value of the expires field to the generation time plus a very small amount of time, sufficient to obtain an authorization.
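The token issuing step can be sketched as follows. This is a minimal model: the grace period and table layout follow the description above, while the table representation and function name are illustrative (our OpenStack prototype in Section 4.2 uses a 10-second initial expiry).

```python
import secrets
import time

TOKEN_GRACE_SECONDS = 10  # short window, sufficient to obtain an authorization

token_table = {}  # Token -> {"authenticated": bool, "expires": float, "uri_ap": str}

def issue_token(uri_ap):
    """Handle an unauthorized request: mint an adequately large random
    token and record it as unauthenticated with a short expiry."""
    token = secrets.token_hex(16)
    token_table[token] = {
        "authenticated": False,
        "expires": time.time() + TOKEN_GRACE_SECONDS,
        "uri_ap": uri_ap,
    }
    return token, uri_ap  # message (2): URI_ap, Token
```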

3.3.3 Consumer authentication and authorization request

This procedure is executed by a consumer upon receiving a response to an unauthorized request. First, the consumer sends her identification data, Pub_CP, URI_ap and the token to an ACP responsible for evaluating the access control policy identified by URI_ap. If the consumer satisfies URI_ap, the ACP creates an authorization message that contains the token, the amount of time that the token should be valid (i.e., its lifetime), URI_ap, and Pub_CP. Then it signs this message and sends it back to the consumer. Therefore, the following messages are exchanged:
  1. (3) Consumer → ACP: ID, Pub_CP, URI_ap, Token

  2. (4) ACP → Consumer: auth, Sign_ACP(auth)

$$auth = Token, Lifetime, URI_{ap}, Pub_{CP} $$
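The ACP side of this exchange can be sketched as below. Note that an HMAC under a demo key stands in for the ACP’s private-key signature purely to keep the sketch self-contained and testable; a real ACP would sign with its RSA private key, as in our prototype.

```python
import hashlib
import hmac

# Demo stand-in for the ACP's key pair (assumption, not the real scheme).
ACP_SIGNING_KEY = b"acp-demo-key"

def make_auth(token, lifetime, uri_ap, pub_cp):
    """Serialize the authorization message exactly as the CP will later
    reconstruct it: auth = Token, Lifetime, URI_ap, Pub_CP."""
    return "|".join([token, str(lifetime), uri_ap, pub_cp]).encode()

def sign_auth(auth):
    """Message (4): the ACP returns auth together with Sign_ACP(auth)."""
    return hmac.new(ACP_SIGNING_KEY, auth, hashlib.sha256).hexdigest()

def verify_auth(auth, signature):
    """What a CP does after reconstructing auth from its own tables."""
    return hmac.compare_digest(sign_auth(auth), signature)
```

Binding Pub_CP into the signed message is what prevents a signature obtained for one CP from being replayed at another.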

3.3.4 Authorized request

This procedure is executed by an ACP-authorized consumer in order to perform an operation. The consumer sends a message that includes the operation request, the token, the token’s lifetime and the signature of the authorization message (i.e., message (4)). Therefore the following message is sent:
  1. (5) Consumer → CP: operation request, Token, Lifetime, Sign_ACP(auth)

Upon receiving this message, a CP should decide if the consumer is allowed to perform the requested operation. Therefore, it executes the following algorithm (Figure 3):
  1. Retrieve the entry of the Token Table that contains the token and check if the token has expired. If it has expired, return an error.
    Figure 3

    Authorized request decision process.

  2. If the authenticated field of the corresponding record in the Token Table is false then:
    1. (a) Retrieve the Pub_ACP that corresponds to the operation from the Access Table.

    2. (b) Retrieve the URI_ap that corresponds to the token from the Token Table.

    3. (c) Reconstruct the authorization message.

    4. (d) Verify Sign_ACP(auth), using Pub_ACP.

    5. (e) If the signature verification succeeds, update the Token Table entry as follows: set the expires field equal to the Lifetime field of the authorization message and set the authenticated field to true. Proceed to Step 3a below.

    6. (f) If the signature verification fails, return an error.

  3. If the authenticated field of the corresponding record in the Token Table is true then:
    1. (a) Find the URI_ap that corresponds to the token from the Token Table.

    2. (b) Find the URI_ap of the requested operation from the Access Table.

    3. (c) Check if the retrieved values match. If they match, return success; else return an error.

If this procedure is successful, any subsequent authorized request may include only the token. Moreover, the same token can be used multiple times, even for invoking different operations protected by the same URI_ap.
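The decision algorithm above can be sketched as a self-contained model. Two simplifications are assumptions of the sketch, not of the scheme: an HMAC under a demo key replaces the ACP’s RSA signature, and the Pub_ACP lookup of step 2a is folded into that single key. Table contents are hypothetical.

```python
import hashlib
import hmac
import time

ACP_KEY = b"acp-demo-key"  # demo stand-in for Pub_ACP / the ACP's private key

def sign(auth):
    return hmac.new(ACP_KEY, auth, hashlib.sha256).hexdigest()

access_table = {"calculate_statistics": "https://acp.example/policy/2"}
token_table = {}  # token -> {"authenticated": bool, "expires": float, "uri_ap": str}

def decide(operation, token, lifetime, signature, pub_cp="Pub_CP"):
    now = time.time()
    entry = token_table.get(token)
    # Step 1: the token must exist and must not have expired.
    if entry is None or entry["expires"] < now:
        return "error: token missing or expired"
    if not entry["authenticated"]:
        # Steps 2b-2d: reconstruct auth = Token, Lifetime, URI_ap, Pub_CP
        # from the local tables and verify the ACP's signature over it.
        auth = "|".join([token, str(lifetime), entry["uri_ap"], pub_cp]).encode()
        if not hmac.compare_digest(sign(auth), signature):
            return "error: bad signature"        # step 2f
        entry["expires"] = now + lifetime        # step 2e
        entry["authenticated"] = True
    # Step 3: the policy bound to the token must guard this operation.
    if entry["uri_ap"] != access_table[operation]:
        return "error: policy mismatch"
    return "ok"
```

Once a token is authenticated, repeated calls succeed without a signature, mirroring the observation that subsequent requests may carry only the token.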

3.4 Use case

Let us now illustrate our scheme through a use case. Enterprise A has outsourced sales records storage and analysis to Cloud provider C P A . The operations implemented in C P A are: update sales records, calculate statistics, and view statistics. Enterprise A has the following access control policies:
  • Policy 1: All sales department employees can update sales records

  • Policy 2: Only the sales department director can calculate statistics

  • Policy 3: All shareholders can view the statistics

Enterprise A implements the above access control policies in an ACP that it owns. The public key of this ACP is denoted by Pub_ACP. For each policy the ACP generates a URI, and CP_A’s Access Table is updated as shown in Table 2.
Table 2

CP_A Access Table new entries

Operation              URI_ap             ACP public key
Update records         URI of Policy 1    Pub_ACP
Calculate statistics   URI of Policy 2    Pub_ACP
View statistics        URI of Policy 3    Pub_ACP

The sales department director issues an unauthorized request for the calculate statistics operation. CP_A generates a token, namely Token1, and responds by sending the message (URI of Policy 2, Token1). CP_A’s Token Table is then updated with the entry shown in Table 3.
Table 3

CP_A Token Table new entry

Token     Authenticated   Expires       URI_ap
Token1    false           timestamp1    URI of Policy 2
As a next step, the sales department director authenticates himself to the ACP, which responds with the following, digitally signed, authorization message: (Token1, timestamp2, URI of Policy 2, \(Pub_{CP_{A}}\)). Then, the sales department director issues the following authorized request: (“calculate statistics”, Token1, timestamp2, Sign_ACP(auth)). CP_A checks if Token1 has expired. Then, it reconstructs the authorization message by retrieving the URI_ap associated with the calculate statistics operation (i.e., the URI of Policy 2) from the Access Table and verifies Sign_ACP(auth) using Pub_ACP (also found in the Access Table). Finally, CP_A checks if the URI_ap found in the Access Table matches the URI_ap included in the entry for Token1 in the Token Table. If all these steps are successful, CP_A executes the calculate statistics operation and modifies the entry for Token1 in the Token Table as shown in Table 4.
Table 4

CP_A Token Table modified entry

Token     Authenticated   Expires       URI_ap
Token1    true            timestamp2    URI of Policy 2
Since Token1 is now marked as authenticated, the sales department director can use it in all subsequent requests, until it expires. Moreover, as long as Token1 remains valid, Sign_ACP(auth) does not have to be included in subsequent requests.

3.5 The “level” extension

In the above use case, it can be observed that if the sales department director wishes to invoke the update records operation, he has to re-authenticate himself, since this operation is protected by a different URI_ap. The level extension mitigates this shortcoming by adding a new field to the Access Table: the consumer level. The consumer level is a number that denotes the minimum level a consumer must have in order to invoke an operation. Using this extension, the Access Table of the Cloud provider considered in the use case of Section 3.4 can be modified as shown in Table 5.
Table 5

CP_A Access Table using the level extension

Operation              URI_ap             Level
Update records         URI of Policy 1    …
Calculate statistics   URI of Policy 2    …
View statistics        URI of Policy 3    …

The ACP public key column is not shown.

With this extension, an ACP has to include the consumer level in the authorization messages. Moreover, a CP now takes part in the access control decision, since it has to check whether the level included in the authorization message is greater than or equal to the level included in the Access Table. Finally, if the level extension is used, Token Tables should additionally include the level that corresponds to a token.

Suppose that the level of the sales department director in the previous use case is 200. Then, he would be able to successfully invoke the update records operation, using T o k e n 1, without re-authenticating himself.
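The additional check a CP performs under the level extension reduces to a single comparison. The numeric levels below are purely illustrative (they are not taken from the use case, apart from the director’s level of 200):

```python
# Hypothetical "Level" column of the Access Table.
required_level = {
    "update_records": 100,
    "calculate_statistics": 200,
    "view_statistics": 50,
}

def level_permits(operation, consumer_level):
    """True if the level asserted in the ACP's authorization message is
    greater than or equal to the minimum level the operation demands."""
    return consumer_level >= required_level[operation]
```

With these illustrative values, a level-200 consumer may invoke update records without re-authenticating, as described above.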

4 Implementation

As a proof of concept we implemented a secure file storage service using a popular open source Cloud stack, OpenStack [22], as well as a Web application that allows the incorporation of our solution in Google Drive [23]. The ACP and the consumer software used in both implementations are the same. Our implementation supports the level extension. As a public-key encryption system we use RSA. Public keys are encoded in JSON format using the Keyczar [24] Python library. The Keyczar library is also used for generating digital signatures.

4.1 ACP and consumer software

The ACP of our proof of concept is implemented as a PHP application hosted on an Apache web server. An SQLite database is used for storing username-password pairs, as well as username to URI_ap-level mappings. Usernames are unique and a username can be mapped to many URI_ap-level pairs (e.g., Table 6). The consumer software implements the authentication and authorization request by encoding the username, the password and the request parameters in a JSON object and POSTing this object to a particular URL, using HTTPS. The response to this request is again encoded in a JSON object. The consumer software has been pre-configured with the public keys of the CP and the ACP components.
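For illustration, the JSON object POSTed by the consumer software might look as follows. The field names and values are hypothetical, not the exact wire format of our prototype:

```python
import json

# Hypothetical authentication and authorization request body (message (3)):
# identification data plus Pub_CP, URI_ap and the token obtained in step (2).
auth_request = json.dumps({
    "username": "director",
    "password": "secret",
    "uri_ap": "https://acp.example/policy/2",
    "pub_cp": "CP-public-key",
    "token": "a3f1c2",
})
```

The ACP’s JSON response would carry the auth message and Sign_ACP(auth) back to the consumer.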
Table 6

An instance of the user management system (username to URI_ap-level mappings).

4.2 OpenStack-based implementation

For our OpenStack-based CP (Figure 4), we leveraged the functionality of the OpenStack component Swift, which is used for building object storage systems. A Swift-based object storage system is composed of two networks: the internal (private) network that consists of storage nodes, and the external (public) network that consists of a proxy server and (optionally) an authentication server. The proxy server accepts HTTP(S) requests and processes them using a Web Server Gateway Interface. The parameters used in each request are encoded in HTTP headers. Each request is pipelined through a number of add-ons, each of which may transform it, forward it, or respond on behalf of the system to the user.
Figure 4

OpenStack-based implementation.

Objects stored in a Swift-based system are organized in a three level hierarchy. The topmost level of this hierarchy is the accounts level, followed by the containers level (second level) and the objects level (third level). The accounts level contains user accounts. Each user account is associated with many containers from the containers level. A container is used for organizing objects, therefore a container is associated with many objects from the objects level. An object may be a file or a folder (that contains other objects). Every object within a container is identified by a container-unique name. Each request for an operation over an object contains a URI that denotes the account, the container and the name of the object in question, i.e., it is of the form “https://CPHostName/accountname/containername/objectname”.
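Splitting such a request URI into the three-level hierarchy can be sketched as below. Note that the object name may itself contain slashes, since an object can be a folder containing other objects:

```python
from urllib.parse import urlparse

def split_swift_path(url):
    """Split a URL of the form
    https://CPHostName/accountname/containername/objectname
    into the Swift three-level hierarchy; any slashes after the
    container stay inside the object name."""
    account, container, obj = urlparse(url).path.lstrip("/").split("/", 2)
    return account, container, obj
```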

We implemented our system as a Swift add-on, added to the pipeline of add-ons that process incoming requests. This add-on replaces Keystone, the default OpenStack component that handles user authentication. Our implementation allows file storage and retrieval, as well as the following operations over the stored files: organizing files in containers, listing the files of a container, copying a file, moving a file and deleting a file. Token and Access Tables are implemented as SQLite tables. An owner hard codes in the Access Table records of the form [path, URI_ap, level, Pub_ACP]. A path may be account-wide, container-wide, or object-wide.

Initially, the consumer software sends an unauthorized GET/POST request over HTTPS. The desired operation is specified in an HTTP header and the URL of the request denotes the object (or the container, or the account) that will be used as input to the operation. When an unauthorized request is pipelined through our add-on, the add-on checks if a URI_ap exists in the Access Table for the URL specified in the request: if such a URI_ap exists, the add-on generates a new token, using the token generation mechanism provided by Swift, and creates a response (as described in Section 3.3); each part of the response is encoded in an HTTP header. The add-on then creates a new entry in the Token Table. The initial expiration time of a token is set equal to the current time plus 10 seconds. Upon receiving the response, the consumer software initiates the authentication and authorization process described in Section 4.1. As a next step, the consumer software sends an authorized request, encoding all request parameters in HTTP headers. The add-on executes the authorized request decision algorithm and produces the appropriate output.
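The add-on’s position in the pipeline can be sketched as a minimal WSGI middleware. Header names and table layout are hypothetical, and expiry, levels and signature verification are elided for brevity; the real add-on performs the full decision algorithm of Section 3.3.4:

```python
import uuid

class AccessControlMiddleware:
    """Minimal sketch in the spirit of our Swift add-on: unauthorized
    requests receive a fresh token and the governing URI_ap in response
    headers; requests whose token is marked authenticated are passed
    down the pipeline to the next component."""

    def __init__(self, app, token_table, access_table):
        self.app = app
        self.token_table = token_table    # token -> {"authenticated": bool, "uri_ap": str}
        self.access_table = access_table  # path -> URI_ap

    def __call__(self, environ, start_response):
        entry = self.token_table.get(environ.get("HTTP_X_AUTH_TOKEN"))
        if entry is not None and entry["authenticated"]:
            return self.app(environ, start_response)  # authorized request
        uri_ap = self.access_table.get(environ.get("PATH_INFO"))
        if uri_ap is None:
            start_response("404 Not Found", [])
            return [b""]
        new_token = uuid.uuid4().hex  # Swift-style random token
        self.token_table[new_token] = {"authenticated": False, "uri_ap": uri_ap}
        start_response("401 Unauthorized",
                       [("X-Auth-Token", new_token), ("X-Policy-URI", uri_ap)])
        return [b""]
```

Because the middleware only consults its own tables and the signature on the token, the wrapped application never sees any consumer identity information.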

4.3 Google drive-based web application

Google Drive is a popular Cloud-based storage service. Google Drive provides a rich API that can be used for building applications that interact with the service over HTTPS. In our implementation we used this API and built a Web application that extends (part of) the Google Drive API, thus providing support for our protocol (Figure 5). Our application is built using Google App Engine [25] and the Python language. Access Tables and Token Tables have been implemented using the Google App Engine Datastore. Currently, our application supports operations for uploading and downloading files. Each operation can be invoked by making an HTTPS call to the operation-specific URL. All call parameters are encoded in HTTP headers.
Figure 5

Google drive-based web application.

Our application has been configured with a Google Drive account which is kept secret. Instead of interacting with the “drive” directly, the consumer software interacts with the application, which acts as middleware, ensuring that only an authorized consumer can perform the implemented operations. The consumer learns no information about the Google Drive account.

The owner hard codes in the web application a URI_ap that controls who can invoke the upload file operation. A consumer initially performs an unauthorized request for uploading a file (the file is not included in this request). The web application generates a token using the UUID Python function, responds to the consumer by encoding the token in an HTTP header, and updates the Token Table. The consumer software initiates the authentication and authorization process described in Section 4.1. Then, it issues an authorized request, by encoding the request parameters in HTTP headers and the file as raw POST data. The web application executes the authorized request decision algorithm and, if the consumer is allowed to upload the file, stores it in Google Drive. When uploading files, consumers are able to specify a URI_ap that controls who can invoke the download file operation for that specific file.

5 Evaluation

5.1 Security evaluation

It can be easily observed that our system enhances consumer privacy. The only information that a CP learns about a consumer is her trust relationship with a particular ACP; if the level extension is used, the CP also learns her level. Of course, the latter can be encoded in a way that reveals no meaningful information. Any other sensitive information is stored in a (trusted) ACP. Moreover, regardless of the lifetime of a token, a consumer may drop it and request a new one in order to avoid CP profiling. Finally, an ACP gains no information about the operations a consumer invokes and the data she accesses: the only information that an ACP learns is the public key of the CP with which the consumer interacts.

Another security feature of our system is that access control policies can be easily modified. Access control policies are stored at a single point (i.e., the ACP) and all CPs hold only pointers to these policies. Therefore, modifying an access control policy does not involve communication with any CP. When an access control policy is modified, all new consumers will be authorized using the new policy, whereas already authorized consumers will be re-authorized with the new policy when their tokens expire.
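This single-point-of-change property can be illustrated with a toy model (all URIs, policy contents, and field names below are made up for illustration): the ACP holds the policy itself, while each CP stores only the URI_ap pointer, so a policy update never touches a CP.

```python
# Toy model of the pointer indirection: the ACP stores policies, CPs store
# only URI_ap pointers. All URIs and policy contents are hypothetical.
acp_policies = {
    "https://acp.example.com/policies/upload": {"allowed_group": "engineering"},
}
cp_protected_ops = {"upload_file": "https://acp.example.com/policies/upload"}

# Modifying the policy is a local change at the ACP; no CP is contacted.
acp_policies["https://acp.example.com/policies/upload"]["allowed_group"] = "hr"

# The CP's pointer is untouched, yet every new authorization uses the
# updated policy the next time the ACP is consulted.
uri_ap = cp_protected_ops["upload_file"]
print(acp_policies[uri_ap]["allowed_group"])  # -> hr
```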

We now proceed to the security analysis of our system using the threat model proposed by Wang et al. [26], adapted to our system. In our analysis we consider three different attack scenarios. In all scenarios we assume that messages are exchanged over a secure channel and that communication endpoints cannot lie about their identity. We do not consider the case in which a malicious entity acts as an ACP and steals the credentials of a consumer, since this attack is outside the scope of our system.

5.1.1 Malicious entity acting as a consumer

In this attack scenario a malicious entity, Con_M, tries to perform an operation protected by an access control policy URI_leg stored in ACP_A. Con_M can only be authorized for the access control policy URI_mal, also stored in ACP_A. Con_M's goal is to obtain an authorization message of the form (Token, Level, Lifetime, URI_leg, Pub_CP). By following our protocol, Con_M will receive an authorization message of the form (Token, Level, Lifetime, URI_mal, Pub_CP). If Con_M includes the signature of this message in his authorized request, the authorized request decision algorithm will result in an error, since the CP will generate a different authorization message for which this signature is not valid (Figure 6). The only way for Con_M to obtain a valid signature is to include URI_leg in the authentication and authorization request, i.e., Con_M should send to ACP_A an authentication and authorization request of the form (ID, Pub_CP, URI_leg, Token). However, since Con_M does not abide by URI_leg, this message will result in an error.
Figure 6

Malicious entity acting as a consumer.
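The rejection described above can be sketched as follows. An HMAC stands in for the ACP's RSA signature purely to keep the example self-contained and runnable; the field layout mirrors the authorization message (Token, Level, Lifetime, URI_ap, Pub_CP), and the key and field values are hypothetical.

```python
import hashlib
import hmac

ACP_KEY = b"acp-demo-key"  # stand-in for the ACP's signing key (hypothetical)

def sign(message: str) -> str:
    # HMAC used as a self-contained stand-in for an RSA signature.
    return hmac.new(ACP_KEY, message.encode(), hashlib.sha256).hexdigest()

def authorization_message(token, level, lifetime, uri_ap, pub_cp):
    return "|".join(str(f) for f in (token, level, lifetime, uri_ap, pub_cp))

# Con_M is only authorized for URI_mal, so the ACP signs that message...
sig = sign(authorization_message("tok", 1, 3600, "URI_mal", "Pub_CP"))

# ...but the CP regenerates the message with the URI_leg it protects,
# so the authorized request decision algorithm rejects the signature.
regenerated = authorization_message("tok", 1, 3600, "URI_leg", "Pub_CP")
accepted = hmac.compare_digest(sig, sign(regenerated))
print(accepted)  # False
```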

5.1.2 Malicious entity acting as a CP

In this attack scenario the attacker's goal is to perform an operation in CP_A, protected by an access control policy URI_A stored in ACP_A. The attacker is able to pretend to be a Cloud provider, CP_mal, as well as to lure a consumer Con_L, who can be authorized for URI_A, into performing this operation. Therefore, this is a man-in-the-middle type of attack.

The attacker initially sends an unauthorized request to CP_A and receives Token_A and URI_A. In order for this attack to be successful, the attacker has to obtain an authorization message of the form (Token_A, Level, Lifetime, URI_A, Pub_CP_A). Con_L is lured into sending an unauthorized request to CP_mal (i.e., to the attacker), which responds with a message of the form (URI_A, Token_A). Subsequently, Con_L sends an authentication and authorization request to ACP_A of the form (ID, Pub_CP_mal, URI_A, Token_A), and receives the authorization message (Token_A, Level, Lifetime, URI_A, Pub_CP_mal). If the attacker sends an authorized request using the signature of this message, the authorized request decision algorithm will result in an error, since CP_A will generate an authorization message that includes Pub_CP_A and not Pub_CP_mal (Figure 7).
Figure 7

Malicious entity acting as a CP.

5.1.3 Malicious entity co-located with a consumer

This attack scenario is applicable when a CP maintains a user management system and associates operations over protected data with particular users (e.g., for charging reasons). In this scenario, the CP also maintains in its Token Table the identifier of the (CP) user for whom each token has been generated. The goal of the attacker is to make the CP believe that a consumer Con_L wants to perform a protected operation. The attacker is a valid CP user, is eligible to perform the same operations as Con_L, and is able to inject messages on behalf of Con_L.

In this attack scenario, the attacker requests to perform an operation OP_A and proceeds through all steps until he receives the authorization message. At this point, instead of sending an authorized request on his own behalf, he sends it on behalf of Con_L. This attack is trivially mitigated, since the CP also maintains the identifier of the user that corresponds to each token, and therefore this message will be rejected (Figure 8). It should be noted, however, that this is possible due to our design choice of having the CP generate the tokens, which is not always the case in similar systems. This attack, for example, was successfully exploited by Wang et al. [26] against three popular websites that were using Facebook Connect and Twitter OAuth for associating their user accounts with the corresponding Facebook and Twitter profiles.
Figure 8

Malicious entity co-located with a consumer.
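The mitigation above amounts to one extra lookup at the CP. A sketch follows; the Token Table field names and values are our own illustrative choices.

```python
# Sketch of the mitigation: the CP's Token Table also records the CP user
# for whom each token was generated (field names are hypothetical).
token_table = {
    "tok-1": {"uri_ap": "URI_A", "user": "attacker"},
}

def accept_authorized_request(token, requesting_user):
    """Reject a request whose token was issued to a different CP user."""
    entry = token_table.get(token)
    return entry is not None and entry["user"] == requesting_user

# Replaying the attacker's own token on behalf of Con_L fails...
print(accept_authorized_request("tok-1", "Con_L"))     # False
# ...while the legitimate holder can still use it.
print(accept_authorized_request("tok-1", "attacker"))  # True
```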

5.2 Overhead

In our implementation, HTTP methods are used for invoking the desired operation. As a public-key encryption system we use RSA. The size of an RSA public key is 2048 bits, whereas the size of a JSON-encoded public key is 400 bytes. Tokens are encoded as 32-byte hex strings, digital signatures as 512-byte hex strings, and token lifetimes as 8-byte hex strings. Finally, a single byte is used for representing access levels. When a consumer wants to invoke an operation in a CP protected by a URI_ap, a number of messages have to be exchanged. If an ACP has already generated an authorization message for the consumer for that URI_ap and the corresponding token has not expired, then only a single message from the consumer to the CP has to be sent. In any other case, five messages have to be exchanged: three between the consumer and the CP, and two between the consumer and the ACP.
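Using the sizes above, the fixed-size part of an authorization message can be tallied (URI_ap is variable-length and excluded; HTTP framing overhead is ignored):

```python
# Payload sizes taken directly from the text above (bytes).
TOKEN = 32         # hex-encoded token
SIGNATURE = 512    # hex-encoded RSA signature
LIFETIME = 8       # hex-encoded token lifetime
LEVEL = 1          # access level
PUBKEY_JSON = 400  # JSON-encoded 2048-bit RSA public key

# Fixed-size fields of an authorization message
# (Token, Level, Lifetime, URI_ap, Pub_CP); URI_ap itself varies.
fixed = TOKEN + LEVEL + LIFETIME + PUBKEY_JSON
print(fixed)  # 441 bytes, plus the URI_ap string

# An authorized request additionally carries the ACP's signature.
print(fixed + SIGNATURE)  # 953 bytes
```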

It can therefore be observed that an ACP and a consumer have a strong incentive to use long-lasting tokensb: the longer the lifetime of a token, the lower the communication overhead for the ACP and the consumer. On the other hand, long-lasting tokens increase the state that a CP has to maintain in its Token Table. In order to illustrate this tradeoff, we simulate the following scenario: a CP hosts files of 100 different enterprises; each enterprise has defined a single protected operation and has 100 employees who invoke that operation following a Poisson process with rate 0.1/min. We simulate a usage period of 8 hours and every 5 min we measure the average network load of each enterprise (caused by the messages exchanged with the ACP), as well as the size of the CP's Token Table (the measured size is the average of all the sizes the Token Table had within the 5 min measurement period). We consider two types of tokens: tokens with a short lifetime (20 min) and tokens with a long lifetime (2 hours). Figure 9 illustrates the average Token Table size of the CP throughout the simulation period, whereas Figure 10 illustrates the average number of messages transmitted inside each enterprise's network throughout the simulation period.
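A scaled-down sketch of this simulation (a single enterprise of 100 employees; the parameter names, the random seed, and the peak-size metric are our own choices, not the paper's 5-minute averaging) reproduces the tradeoff:

```python
import random

random.seed(7)
RATE = 0.1 / 60      # invocations per second per employee (0.1/min)
EMPLOYEES = 100      # one enterprise, scaled down from the 100 simulated
DURATION = 8 * 3600  # 8-hour usage period, in seconds

def simulate(token_lifetime):
    """Return (peak Token Table size, messages exchanged with the ACP)."""
    expiry = [0.0] * EMPLOYEES  # per-employee token expiry time
    acp_messages = 0
    peak = 0
    t = 0.0
    while True:
        # exponential inter-arrivals across the whole workforce
        t += random.expovariate(RATE * EMPLOYEES)
        if t >= DURATION:
            return peak, acp_messages
        worker = random.randrange(EMPLOYEES)
        if expiry[worker] <= t:  # token expired: two messages to the ACP
            acp_messages += 2
            expiry[worker] = t + token_lifetime
        peak = max(peak, sum(1 for e in expiry if e > t))

short_peak, short_msgs = simulate(20 * 60)  # 20-minute tokens
long_peak, long_msgs = simulate(2 * 3600)   # 2-hour tokens
# Longer tokens: fewer ACP messages, but a larger Token Table.
```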
Figure 9

Average number of Token Table entries as a function of token lifetime, using 5 minute sampling periods.

Figure 10

Number of messages exchanged between a consumer and an ACP as a function of token lifetime, using 5 minute sampling periods. During the lifetime of a token, no messages are exchanged.

5.3 Comparison with existing systems

We now compare our solution with two popular related systems: Google Drive and Amazon S3.

5.3.1 Google Drive

Google Drive, a Cloud-based storage service, enables users to access, share, and organize their files in the Cloud. The Google Drive API provides a limited set of policies, namely “full access”, “read only access”, “metadata only access”, and “specific file access”. These policies are not applied per stored item; instead, they are granted in the form of “permissions” to applications that want to access a specific drive. Before using a “drive”, an application requests from the drive owner one of the aforementioned permission types; the drive owner authenticates himself using a Google account and grants permissions using OAuth 2.0. In most cases, the user that executes the application requesting permissions and the owner of the drive are the same entity. Permissions are granted in the form of a token that never expires: in order for a drive owner to remove permissions from a specific application, she has to revoke the token manually. Google Drive does not support integration with enterprise-specific authentication and authorization systemsc.

In order for an application to perform an operation, the following messages have to be exchanged (here we consider that the user executing the application is the drive owner, referred to as the consumer):

  1. Consumer → GoogleAuth: Request permission

  2. Consumer → GoogleAuth: Authenticate

  3. Consumer → GoogleAuth: Grant permission

  4. GoogleAuth → Consumer: Token

  5. Consumer → GoogleDrive: Operation, Token

Compared to our system, the same number of messages is required. Nevertheless, messages 1 to 4 are usually sent only once, since tokens never expire. It should also be noted that the entity that performs the authorization is the drive owner herself (the consumer); therefore, authorization is a manual process.

5.3.2 Amazon S3

Amazon Simple Storage Service, or S3 for short, is a well-known Cloud-based file storage service. S3 provides Web services that allow users to store and organize their files in the Cloud. Files are organized in “buckets”. A user may set Access Control Lists (ACLs) that define the permissions that a user or a group of users have over a specific bucket or over a specific file. ACLs are encoded in XML and the permissions that can be granted are “read”, “write”, “read ACL”, “modify ACL”, and “full control”. For more fine-grained access control, S3 provides an “access control policy language” that allows users to create bucket-specific policies. These policies can control access to a bucket and its objects based on user identities, source IP addresses, time and date, and other parameters.
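A representative bucket policy of the kind just described, expressed here as a Python dict for readability (the structure follows Amazon's documented policy language; the bucket name, account ID, user, and IP range are made up for illustration):

```python
import json

# Hypothetical bucket policy: allow one IAM user to read objects from
# example-bucket, but only from a specific source network.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

# S3 expects the policy as a JSON document attached to the bucket.
document = json.dumps(policy, indent=2)
```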

S3 provides an API that allows users (consumers) to be authenticated using their own (enterprise-specific) identity provider. In order for an operation to be performed, the following messages have to be exchanged:

  1. Consumer → IdentityProvider: Authenticate

  2. IdentityProvider → AmazonTokenService: Request Token

  3. AmazonTokenService → IdentityProvider: Token

  4. IdentityProvider → Consumer: Token

  5. Consumer → AmazonS3: Operation, Token


It can be seen that the same number of messages is required as in our system. Nevertheless, in the S3 system the authorization is performed by Amazon and not by the identity provider; therefore, access control policies have to be stored on an Amazon server. This, combined with the fact that policies are defined using Amazon's own policy definition language, creates a “lock-in” risk. Moreover, all users identified by their own identity provider are considered to have the same role (i.e., “federated users”), limiting the flexibility of the access control policies. Finally, a secret has to be shared between the user's identity provider and Amazon's token service in order for steps 2 and 3 to take place successfully.

6 Discussion

So far we have explored the possibilities that our solution offers in a “traditional” usage model: an enterprise that uses Cloud computing for outsourcing data storage and computation. However, the introduction of a new role, that of the ACP, and the decoupling of the data storage and access control functions create many new business opportunities.

One area that can benefit from our solution is that of B2B applications. Suppose that enterprise A wants to offer access to some of its (Cloud-based) services to a department of enterprise B. Enterprise B can expose a URI_ap that authenticates and authorizes the users of that particular department. Enterprise A can then use this URI_ap to protect the shared services. In this way, enterprise A can perform access control without learning anything about the internal user management system of enterprise B. Enterprise A may also offer services to the customers of enterprise B using a similar approach.

Our solution also creates a new business opportunity: we envision that a new market of access control providers can arise. In addition to enterprise-specific ACPs, there can be independent ACPs that offer security services to end-users. Existing security companies can use their expertise to offer cutting-edge access control services without investing in the Cloud market, and existing social networks may leverage their services to act as ACPs. To this end, future work on our scheme includes support for ACP federations and support for multiple URI_ACP definitions per data item.

7 Conclusions

In this paper we proposed a solution to a thorny problem that impedes Cloud technology adoption: access control. The proposed solution enables data owners to outsource data storage and computation without losing governance of their assets. In our solution, access control is provided as a service by a new entity, the Access Control Provider (ACP). Access control as a service relieves Cloud providers of the burden of implementing complex security solutions and enables enterprises to deploy their own access control mechanisms. We demonstrated the feasibility of our scheme through proof-of-concept implementations: we implemented our system as an add-on for the open-source Cloud stack OpenStack, and we developed a Web application that allows the incorporation of our system into Google Drive. We showed that our scheme is secure and has significant privacy properties. The proposed system adds minimal overhead and does not require any particular Cloud implementation or ACP structure; therefore, it constitutes a realistic solution to the problem. Finally, we believe that the proposed solution can pave the way for exciting new applications and business opportunities.

8 Endnotes

a SAML is a generic XML language used for exchanging security assertions between different entities.

b Provided that this does not jeopardize the security of the scheme.

c Google provides a SAML based SSO system that can be used to integrate enterprise specific authentication systems, but only in Web applications.



This research was supported in part by a grant from the Greek General Secretariat for Research and Technology, financially managed by the Research Center of AUEB.

Authors’ Affiliations

Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business


  1. PwC (2012) Global State of Information Security Survey.
  2. Subashini S, Kavitha V (2011) A survey on security issues in service delivery models of cloud computing. J Netw Comput Appl 34(1):1–11.
  3. Gorniak S (ed) (2010) Priorities for research on current and emerging network trends. ENISA.
  4. Catteddu D, Hogben G (eds) (2009) Cloud computing: benefits, risks and recommendations for information security. ENISA.
  5. Cloud Security Alliance (2013) The notorious nine: cloud computing top threats in 2013.
  6. Armando A, Carbone R, Compagna L, Cuellar J, Tobarra L (2008) Formal analysis of SAML 2.0 web browser single sign-on: breaking the SAML-based single sign-on for Google Apps. In: Proc. of the 6th ACM Workshop on Formal Methods in Security Engineering, 1–10. ACM, New York, NY.
  7. Somorovsky J, Mayer A, Schwenk J, Kampmann M, Jensen M (2012) On breaking SAML: be whoever you want to be. In: Proc. of the 21st USENIX Security Symposium. USENIX Association, Berkeley, CA.
  8. Fotiou N, Machas A, Polyzos GC, Xylomenos G (2014) Access control delegation for the cloud. In: Proc. of the IEEE INFOCOM Workshops (INFOCOM WKSHPS), 13–18. IEEE, Canada.
  9. Wang G, Liu Q, Wu J (2010) Hierarchical attribute-based encryption for fine-grained access control in cloud storage services. In: Proc. of the 17th ACM Conference on Computer and Communications Security (CCS '10), 735–737. ACM, New York, NY.
  10. Zhou L, Varadharajan V, Hitchens M (2011) Enforcing role-based access control for secure data storage in the cloud. Comput J. doi:10.1093/comjnl/bxr080.
  11. Li J, Zhao G, Chen X, Xie D, Rong C, Li W, Tang L, Tang Y (2010) Fine-grained data access control systems with user accountability in cloud computing. In: Proc. of the 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 89–96. IEEE Computer Society, Washington, DC.
  12. Yu S, Wang C, Ren K, Lou W (2010) Achieving secure, scalable, and fine-grained data access control in cloud computing. In: Proc. of IEEE INFOCOM 2010, 1–9. IEEE Press, Piscataway, NJ.
  13. OASIS (2013) eXtensible Access Control Markup Language (XACML) Version 3.0.
  14. Goyal V, Pandey O, Sahai A, Waters B (2006) Attribute-based encryption for fine-grained access control of encrypted data. In: Proc. of the 13th ACM Conference on Computer and Communications Security (CCS '06), 89–98. ACM, New York, NY.
  15. Recordon D, Reed D (2006) OpenID 2.0: a platform for user-centric identity management. In: Proc. of the 2nd ACM Workshop on Digital Identity Management, 11–16. ACM, New York, NY.
  16. Hardt D (ed) (2012) The OAuth 2.0 authorization framework. RFC 6749.
  17. Nunez D, Agudo I, Lopez J (2012) Integrating OpenID with proxy re-encryption to enhance privacy in cloud-based identity services. In: Proc. of the 4th IEEE International Conference on Cloud Computing Technology and Science. IEEE Computer Society, Washington, DC.
  18. Khan RH, Ylitalo J, Ahmed AS (2011) OpenID authentication as a service in OpenStack. In: Proc. of the 7th International Conference on Information Assurance and Security, 372–377. IEEE. doi:10.1109/ISIAS.2011.6122782.
  19. Yavatkar R, Pendarakis D, Guerin R (2000) A framework for policy-based admission control. RFC 2753.
  20. Durham D (ed) (2000) The COPS (Common Open Policy Service) protocol. RFC 2748.
  21. Cantor S, Kemp J, Philpott R, Maler E (eds) (2005) Assertions and protocols for the OASIS Security Assertion Markup Language (SAML) v2.0. OASIS.
  22. OpenStack homepage, last accessed 27 Apr. 2015.
  23. Google Drive homepage, last accessed 27 Apr. 2015.
  24. Google Keyczar homepage, last accessed 27 Apr. 2015.
  25. Google App Engine homepage, last accessed 27 Apr. 2015.
  26. Wang R, Chen S, Wang X (2012) Signing me onto your accounts through Facebook and Google: a traffic-guided security study of commercially deployed single-sign-on web services. In: Proc. of the IEEE Symposium on Security and Privacy, 365–379. IEEE Computer Society, Washington, DC.


© Fotiou et al.; licensee Springer. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.