Cloud Computing, ownership matters

In the past fifteen years, many internal enterprise IT departments evolved from artisan organizations that assembled and provided customized, tailor-made products into hybrid craft and mass-production organizations that provide custom as well as standard products. Nowadays, however, these IT departments are confronted with external organizations that deliver standard services and products which can easily be adapted to the needs of the customer, based on the concept of “mass customization”.

Instead of buying, owning, and managing all kinds of IT components yourself, IT infrastructure is nowadays offered as a service by means of cloud computing.

There is a shift from a “goods-dominant logic” to a “service-dominant logic”, where the focus moves away from tangibles and toward intangibles. This trend is supported by the use of virtualization technology for servers, storage, and network devices, as well as for applications.

The cloud computing promise of lower costs, shorter time to market, and on-demand provisioning makes it very tempting for organizations to outsource their IT infrastructure and services.

But aren’t we forgetting something? One of the most important aspects of information processing is that an organization retains the right controls over the use of applications, data, and infrastructure. Incomplete control can lead to all kinds of issues around business obligations and liabilities.

Control of these items is arranged by contracts, which are in fact an exchange of property rights. These property rights are somewhat complicated because they have several dimensions:

  • The right of possession
  • The right of control
  • The right of exclusion (access rights)
  • The right of enjoyment (earn income from it)
  • The right of disposition (buying or selling)

The consequence of these different dimensions is that different parties are able to hold partitions of rights to particular elements of a resource. On top of this there is the issue of legislation. When we talk about ownership we have to be careful, because in legal systems ownership is based on tangible, physical objects. And yes, of course, we have legislation about intellectual property, trademarks, and so on, but when it comes to virtualized objects it becomes murky. Also, cloud computing is about delivering services (intangibles), not goods (tangibles).

The transition from a “goods-dominant logic” to a “service-dominant logic” is a mind shift in which the “bundle of rights”, or property ownership, still matters.

Signing cloud computing deals is not only about money and provisioning; it is also about control. When a cloud computing sourcing deal takes place, the partitions of property rights should be grouped into appropriate bundles so that the customer stays in control.
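To make this concrete, here is a minimal sketch (my own illustration, not part of any contract standard) of how the bundle of rights could be recorded per resource element during a sourcing deal, so that gaps in the customer's bundle become visible:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Right(Enum):
    POSSESSION = auto()
    CONTROL = auto()
    EXCLUSION = auto()    # access rights
    ENJOYMENT = auto()    # the right to earn income from it
    DISPOSITION = auto()  # buying or selling

@dataclass
class ResourceElement:
    name: str
    holders: dict = field(default_factory=dict)  # Right -> party holding it

    def missing_for(self, party: str) -> list:
        """Rights of this element that the given party does not hold."""
        return [r.name for r in Right if self.holders.get(r) != party]

# Hypothetical example: customer data hosted on a provider's platform.
data = ResourceElement("customer data", {
    Right.POSSESSION: "provider",   # physically stored on provider hardware
    Right.CONTROL: "customer",
    Right.EXCLUSION: "customer",
    Right.ENJOYMENT: "customer",
    Right.DISPOSITION: "customer",
})
print(data.missing_for("customer"))  # ['POSSESSION']
```

Walking through each element of a deal this way makes explicit which parts of the bundle are handed over to the provider and which are kept.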


Mobility and Cloud Computing and the need for a new security concept

Mobility and Cloud Computing ask for a new approach to structuring and organizing security.

The modern way of doing business and service provisioning is based on openness and agility.
This puts the traditional security concept, in which an organization is positioned as an “information fortress”, strongly under pressure. The traditional perimeter is vanishing and sensitive information travels outside your organization in many ways and on many different devices. Mobility and cloud services are the name of the game.

For the most part business information is stored as unstructured data. Unstructured data refers to information that either does not have a pre-defined data model or is not organized in a pre-defined manner and is usually stored in files instead of databases.

This context creates a strong need for different security concepts. Instead of securing a predefined and fixed “location”, the focus must shift to securing information that is itself mobile: information security at the file level. In other words, file protection and control beyond the perimeter of the enterprise, because the rule set is stored with the file itself.
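As an illustration only (not a description of any specific product), the sketch below shows the core idea: the access rules are embedded in the file envelope itself, so they can be evaluated wherever the file ends up. Real products would additionally encrypt the payload and verify identities against a central service.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRule:
    principal: str   # user or group identifier
    actions: set     # e.g. {"read", "edit"}

@dataclass
class ProtectedFile:
    name: str
    content: bytes
    rules: list = field(default_factory=list)  # the rule set travels with the file

    def is_allowed(self, principal: str, action: str) -> bool:
        return any(r.principal == principal and action in r.actions for r in self.rules)

# Hypothetical file with an embedded policy.
doc = ProtectedFile("q3-forecast.xlsx", b"...", rules=[
    AccessRule("finance-team", {"read", "edit"}),
    AccessRule("board", {"read"}),
])
print(doc.is_allowed("board", "edit"))  # False, no matter where the file is opened
```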

There are security products on the market that make it possible to achieve security at the file level. Applying this security concept will affect the current and usual roles and activities related to security.

It is still common for a security officer, in cooperation with the system administrator, to provide safe work areas whose access is protected by firewalls, passwords, and malware scanners. The user organization is consulted and its voice, the voice of the customer, is reflected in the general security policy, but the focus is on securing the IT infrastructure. Bottom line: security is treated as an IT issue.

By implementing a security concept based on information security at the file level, one comes into very close contact with the daily operational activities of the users and the organization. Security choices have a much more immediate and greater impact on work and activities than in a traditional security concept.

Who has access to which information, when, and where? Access management can be handled very well with the new file access management tools. But access management is not an isolated topic. Access is granted to individuals and groups, so the topic of identity management comes into sight and should be very well organized. Access management is part of the triple A: authentication (identity management), authorization (access management), and accounting. These are well-known concepts in the security world, but they were mainly applied at the level of the IT infrastructure. Now that the focus urgently needs to shift from the security of the IT infrastructure to the security of information, one comes into very close contact with the daily work of the organization. This applies to authorization as well as to authentication and accounting.
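A minimal sketch, with hypothetical identity and rule stores, of how the triple A maps onto a single file-level access request:

```python
import datetime

IDENTITIES = {"alice": {"finance-team"}, "bob": {"sales-team"}}        # identity management
FILE_RULES = {"q3-forecast.xlsx": {"finance-team": {"read", "edit"}}}  # access management
AUDIT_LOG = []                                                         # accounting

def request_access(user: str, filename: str, action: str) -> bool:
    groups = IDENTITIES.get(user, set())                          # authentication: who is asking?
    rules = FILE_RULES.get(filename, {})
    allowed = any(action in rules.get(g, set()) for g in groups)  # authorization: may they do it?
    AUDIT_LOG.append((datetime.datetime.now(datetime.timezone.utc),
                      user, filename, action, allowed))           # accounting: record the decision
    return allowed

print(request_access("alice", "q3-forecast.xlsx", "edit"))  # True, and logged
print(request_access("bob", "q3-forecast.xlsx", "read"))    # False, and logged
```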

In the traditional division of labor, a security officer defines a security policy based on input from the user organization and on rules and legislation, and then agrees with the IT organization which mechanisms will be used to operationalize this policy. Product selection, technical equipping, and daily operation are performed by the IT organization. Thus the questions of why, how, and with what are answered. In fact, the user organization is only involved in the first question and plays no role in the implementation. The security officer, in turn, is hardly involved in the daily operations of the IT organization and the user organization. It is a slow, top-down approach.
In terms of a responsibility assignment matrix (RACI), one could say that the security officer is accountable, the system administrator is responsible, and the user organization is consulted and informed.

The application of information security through access management at the file level puts this traditional way of working under pressure. The fine granularity of access rights at the file level, in combination with the dynamics of working processes, the required agility, and shortening delivery times, conflicts with the traditional hierarchical top-down approach.

To solve this issue a different way of working is required, in which the user organization is much more involved in information security or, better stated, actually takes the lead. After all, it affects their daily work. The security officer, the auditor, and the IT organization must be well aware of the daily work of the user organization. They should know what is going on in the workplace to ensure that access management is workable, produces the desired result, and meets the expectations of the user organization. Identity management and accounting at this micro level should also be taken into account.

To achieve information security by means of access management at the file level, it is advisable to take a closer look at the different roles involved: the security officer, the auditor, the system administrator, and the “super user” or functional administrator of the access rules for the user organization.

• The security officer defines, on behalf of the user organization, the information security policy and the mechanisms (the solution approach regarding security organization, workflow, and technology) with which it should be realized.
• The auditor determines how accounting should take place (what the control points are, which information should be captured to comply with laws and regulations, what the audit trail is) and executes audits.
• The system administrator operationalizes access management within the framework set by the security officer and the auditor, and also takes care of the relationship (in terms of technology and execution) with the adjacent areas of identity management and monitoring (accounting).
• The “super user” or functional administrator of the user organization actually manages the access rules within the framework set by the security officer and the auditor.

To support the modern way of doing business and service provisioning we need to create an agile security organization, with a transparent separation of interests and responsibilities. Instead of a hierarchical security organization, a flat organization is needed in which the “super user” is accountable, the system administrator is responsible, the security officer and the auditor are consulted, and the end users are informed.
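Expressed as a simple structure (an illustration of the proposal above, not a prescribed tool), the shift in the RACI assignment for managing file access rules looks like this:

```python
# Traditional versus proposed flat RACI for managing access rules.
TRADITIONAL = {
    "security officer": "Accountable",
    "system administrator": "Responsible",
    "user organization": "Consulted / Informed",
}
FLAT = {
    "super user / functional administrator": "Accountable",
    "system administrator": "Responsible",
    "security officer": "Consulted",
    "auditor": "Consulted",
    "end users": "Informed",
}

for label, raci in (("traditional", TRADITIONAL), ("flat", FLAT)):
    print(label)
    for role, assignment in raci.items():
        print(f"  {role:40s} {assignment}")
```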

The Connection between Enterprise Architecture and Data Centers

What is the connection between enterprise architecture and something as technical as data centers?

In one of his blog posts, Tom Graves, a well-known enterprise architect, discusses the needs of enterprise architecture clients.

According to Tom: “What paying-folks in business and elsewhere want from enterprise-architecture and suchlike is very rarely about theory or anything of that kind – on frameworks, reference-models, schemas and so on. Nope. What they want from us is practical answers to practical business-questions – almost nothing more than that” (emphasis by Tom Graves).

I would add to this that enterprise architecture is all about decision support and gaining insight into certain organisational issues. Depending on the issues (and the main objective one wants to achieve), we can define different work domains with different goals and timelines, as shown in the figure below.

[Figure: Enterprise architecture work domains]

The enterprise architect should help an organization to operate as one, working towards a common, shared vision supported by a well-orchestrated set of actions. The way to do this is to have the capability to create, implement, and maintain a coherent enterprise design, also known as an enterprise architecture.

The promise of enterprise architecture is that designing an enterprise by applying systematic, rational methods will produce an enterprise that is capable of pursuing its purposes more effectively and efficiently.

Enabling organisations to make better-informed decisions.

One of the main activities of the enterprise architect (and one of the main differentiators from other architects) should be to help create insight into the relation between developments in technology and society on the one hand and the organization on the other, and into the impact these developments have at the strategic, tactical, and operational levels of the organization.

Inform, communicate and facilitate are the verbs of the enterprise architect.

Walk the talk

Based on these ideas I wrote a book about data centers. Why?

In the last few years the focus of data centers was mainly on IT efficiency. But currently there is much more at stake than ordinary operational technicalities, something not everyone is aware of.

In large parts of the world, computers, Internet services, mobile communication, and cloud computing have become part of our daily lives, both professional and private. Information and communication technology has invaded our lives and is recognized as a crucial enabler of economic and social activities across all sectors of society. The opportunity to be connected anytime and anywhere, to communicate, interact, and exchange data, is changing the world.

So, during the last two decades, a digital information infrastructure has been created whose functioning is critical to our society and to the governmental and business processes and services that depend on computers. Data centers, buildings that house computer servers along with network and storage systems, are a crucial part of this critical digital infrastructure. They are the physical manifestation of the digital economy and of the virtual and digital information infrastructure, where data is processed, stored, and transmitted.

Given that we live in a world with limited resources while the demand for digital infrastructure grows exponentially, limits will be encountered. The limiting factor for future economic development is the availability and functioning of natural capital. Therefore, we need a new and better industrial model.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.

Therefore organisations have to rethink the “data center equation” of “people, planet, profit”.

With a Cradle-to-Cradle approach we can transform current production systems with linear material flows (take, make, waste) into production systems with circular material flows (reuse, recycle, and recover), in order to prevent resource depletion, the disposal of valuable materials, and harmful emissions.

A sustainable data center should be environmentally viable, economically equitable, and socially bearable. The combination of service-dominant logic and Cradle-to-Cradle makes it possible to create a sustainable data center industry.

Data Center 2.0: The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities (ISBN 978-1499224689).


Sourcing IT: Cloud Computing Roadblocks

Roadblocks

Cloud computing, part of the widespread adoption of a service-oriented business approach, is becoming pervasive and is rapidly evolving with new propositions and services. Organisations are therefore faced with the question of how these various cloud propositions from different providers will work together to meet business objectives.

The latest cloud computing study by 451 Research showed some interesting key findings:

  1. Sixty percent of respondents view cloud computing as a natural evolution of IT service delivery and do not allocate separate budgets for cloud computing projects.
  2. Despite the increased cloud computing activity, 83% of respondents are facing significant roadblocks to deploying their cloud computing initiatives, a 9% increase since the end of 2012. IT roadblocks have declined to 15% while non-IT roadblocks have increased to 68% of the sample, mostly related to people, processes, politics and other organizational issues.
  3. Consistent with many other enterprise cloud computing surveys, security is the biggest pain point and roadblock to cloud computing adoption (30%). Migration and integration of legacy and on-premise systems with cloud applications (18%) is second, lack of internal process (18%) is third, and lack of internal resources/expertise (17%) is fourth.

It looks like many organizations believe in a smooth evolution of their current IT infrastructure towards a cloud computing environment, while, on the other hand, they are right now facing significant roadblocks.

Remarkably, one very important roadblock is missing from the top four mentioned in this study.

The cloud computing service models offer the promise of massive cost savings combined with increased IT agility, based on the assumptions of:

  • Delivering IT commodity services.
  • Improved IT interoperability and portability.
  • A competitive and transparent cost model on a pay-per-use basis.
  • The quiet assumption that the service provider acts on behalf of and in the interest of the customer.


So with cloud computing you could get rid of the traditional proprietary, costly, and inflexible application silos. These silos would be replaced by an assembly of standardised cloud computing building blocks with standard interfaces that ensure interoperability.

But does the current market offer standardized cloud computing building blocks and interoperability?

Commodity

Currently the idea is that cloud computing comes in three flavors, based on the NIST reference model [1]:

  1. Cloud Software as a Service (SaaS); “The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).”
  2. Cloud Platform as a Service (PaaS); “The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.”
  3. Cloud Infrastructure as a Service (IaaS); “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.”

Each standard service offering (SaaS, PaaS, IaaS) has a well-defined interface. The consequence of this is that the consumer can’t manage or control the underlying components of the platform that is provided. The platform offers the service as-is. Therefore the service is an IT commodity service; customization is by definition not possible [2].
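A rough sketch, reflecting my reading of the NIST definitions [1] rather than an official mapping, of where the management boundary lies for each service model; everything below the consumer's layers is consumed as-is, which is what makes the offering a commodity:

```python
STACK = ["facility", "network", "storage", "servers", "virtualization",
         "operating system", "middleware/runtime", "application", "data"]

# Layers the consumer manages per service model (illustrative, not normative).
CONSUMER_MANAGES = {
    "IaaS": {"operating system", "middleware/runtime", "application", "data"},
    "PaaS": {"application", "data"},
    "SaaS": {"data"},  # typically only the data entered and some configuration
}

for model, consumer_layers in CONSUMER_MANAGES.items():
    provider_layers = [layer for layer in STACK if layer not in consumer_layers]
    print(f"{model}: provider manages {', '.join(provider_layers)}")
```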

But is this a realistic picture of the current landscape? In reality the distinction between IaaS, PaaS, and SaaS is not so clear. Providers are offering all kinds of services that don’t fit well into this three-flavor scheme. Johan den Haan, CTO of Mendix, wrote a nice blog post about this topic in which he proposes a more detailed framework to categorize the different approaches seen in the market today.

Besides a more granular description of cloud computing services, a distinction is made between compute, storage, and networking. This aligns very well with the distinction that can be made from a software perspective: behavior (compute), state (storage), and messages (networking). The end result is a framework with three columns and six layers, as shown in the image below.

Cloud Platform Framework. Courtesy of Johan den Haan.

  • Layer 1: The software-defined datacenter.
  • Layer 2: Deploying applications.
  • Layer 3: Deploying code.
  • Layer 4: Model/process driven deployment of code.
  • Layer 5: Orchestrating pre-defined building blocks.
  • Layer 6: Using applications.

 While layer 2 is focused on application infrastructure, layer 3 shifts the focus to code. In other words: layer 2 has binaries as input, layer 3 has code as input.

The framework shows the complexity organisations are facing when they want to make the transition to cloud computing. What kinds of interfaces or APIs are offered by the different cloud providers, and are they standardized or proprietary? What does this mean for migration and integration?
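The sketch below (hypothetical offerings and field names) shows one way to capture these questions per cloud offering, so that migration and integration risks become explicit when comparing providers:

```python
from dataclasses import dataclass

@dataclass
class CloudOffering:
    provider: str
    layer: int            # 1-6 in the framework above
    column: str           # "compute", "storage" or "networking"
    api_standard: bool    # standardized (open spec) or proprietary interface?

    def migration_risk(self) -> str:
        return "lower: standard interface" if self.api_standard else "higher: proprietary interface"

# Hypothetical offerings used purely for illustration.
offerings = [
    CloudOffering("provider-a", layer=2, column="compute", api_standard=True),
    CloudOffering("provider-b", layer=5, column="compute", api_standard=False),
]
for o in offerings:
    print(f"{o.provider} (layer {o.layer}, {o.column}): {o.migration_risk()}")
```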

Interoperability

The chair of the IEEE Cloud Computing Initiative, Steve Diamond [3], stated that “Cloud computing today is very much akin to the nascent Internet – a disruptive technology and business model that is primed for explosive growth and rapid transformation.” However, he warns that “without a flexible, common framework for interoperability, innovation could become stifled, leaving us with a siloed ecosystem.”

Clouds cannot yet federate and interoperate. Such federation is called the Intercloud. The concept of a cloud operated by one service provider or enterprise interoperating with a cloud operated by another provider is a powerful means of increasing the value of cloud computing to industry and users. IEEE is creating technical standards (IEEE P2302) for this interoperability.

The Intercloud architecture they are working on is analogous to the Internet architecture. There are public clouds, which are analogous to ISPs, and there are private clouds, which an organization builds to serve itself (analogous to an intranet). The Intercloud will tie all of these clouds together.

The Intercloud contains three important building blocks:

  • Intercloud Gateways: analogous to Internet routers, they connect a cloud to the Intercloud.
  • Intercloud Exchanges: analogous to Internet exchanges and peering points (called brokers in the US NIST Reference Architecture), where clouds can interoperate.
  • Intercloud Roots: services such as naming authority, trust authority, messaging, semantic directory services, and other root capabilities. The Intercloud root is not a single entity; it is a globally replicated and hierarchical system.

InterCloud Architecture. Courtesy of IEEE.

According to IEEE: “The technical architecture for cloud interoperability used by IEEE P2302 and the Intercloud is a next-generation Network-to-Network Interface (NNI) ‘federation’ architecture that is analogous to the federation approach used to create the international direct-distance dialing telephone system and the Internet. The federated architecture will make it possible for Intercloud-enabled clouds operated by disparate service providers or enterprises to seamlessly interconnect and interoperate via peering, roaming, and exchange (broker) techniques. Existing cloud interoperability solutions that employ a simpler, first-generation User-to-Network Interface (UNI) ‘Multicloud’ approach do not have federation capabilities and as a result the underlying clouds still function as walled gardens.”
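Purely as an illustration of the three building blocks described above (not the IEEE P2302 specification itself), the sketch below shows how a request from one cloud could reach another via the root's naming and trust services, an exchange, and the target cloud's gateway:

```python
class IntercloudRoot:
    """Globally replicated root services: naming, trust, directories (simplified)."""
    def __init__(self):
        self.directory = {}               # cloud name -> gateway

    def register(self, name, gateway):
        self.directory[name] = gateway

    def resolve(self, name):
        return self.directory[name]

class IntercloudGateway:
    """Connects one cloud to the Intercloud, like a router connects a site to the Internet."""
    def __init__(self, cloud_name):
        self.cloud_name = cloud_name

    def receive(self, payload):
        return f"{self.cloud_name} handled: {payload}"

class IntercloudExchange:
    """Peering point ('broker' in NIST terms) where clouds interoperate."""
    def __init__(self, root):
        self.root = root

    def federate(self, source, target_name, payload):
        target = self.root.resolve(target_name)   # naming/trust lookup via the root
        return target.receive(f"{payload} (from {source.cloud_name})")

# Hypothetical federation between two clouds.
root = IntercloudRoot()
cloud_a, cloud_b = IntercloudGateway("cloud-a"), IntercloudGateway("cloud-b")
root.register("cloud-a", cloud_a)
root.register("cloud-b", cloud_b)
exchange = IntercloudExchange(root)
print(exchange.federate(cloud_a, "cloud-b", "provision 2 vCPUs"))
```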

Lock-in

The current lack of standard cloud services with non-proprietary interfaces and APIs, and the absence of an operational cloud standard for interoperability, can cause all kinds of lock-in situations. We can distinguish four types of lock-in [2]:

  1. Horizontal lock-in: restricted ability to replace the solution with a comparable service or product.
  2. Vertical lock-in: the solution restricts choice in other levels of the value chain.
  3. Inclined lock-in: a less than optimal solution is chosen because of a one-stop-shopping policy.
  4. Generational lock-in: replacing the solution with next-generation technology is prohibitively expensive and/or technically or contractually impossible.

Developing interoperability and federation capabilities between cloud services is considered a significant accelerator of market liquidity and lock-in avoidance.

The cloud computing market is still an immature market. One implication of this is that organisations need to take a more cautious and nuanced approach to IT sourcing and their journey to the clouds.

A proper IT infrastructure valuation, based on well-defined business objectives, demand behavior, functional and technical requirements and in-depth cost analysis, is necessary to prevent nasty surprises [2].

References

[1] Mell, P. & Grance, T., 2011, ‘The NIST Definition of Cloud Computing’, NIST Special Publication 800-145, USA

[2] Dijkstra, R., Gøtze, J., Ploeg, P.v.d. (eds.), 2013, ‘Right Sourcing – Enabling Collaboration’, ISBN 9781481792806

[3] IEEE, 2011, ’IEEE launches pioneering cloud computing initiative’, http://standards.ieee.org/news/2011/cloud.html