One of the topics most associated with cloud computing is its cost advantages, or lack thereof. One way the topic gets discussed is "capex vs opex," a simple formulation, but one fraught with meaning.
At its simplest, capex vs opex defines how compute resources are paid for by the consumer of those resources. For example, if one uses Amazon Web Services, payment is made at a highly granular level for use of the resources, either by time (so much per server hour) or by consumption (so much per gigabyte of storage per month). The consumer does not, however, own the assets that deliver those resources. Amazon owns the server and the storage machinery.
From an accounting perspective, owning an asset is commonly considered a capital expenditure (thus the sobriquet capex). It requires payment for the entire asset and the cost becomes an entry on the company's balance sheet, depreciated over some period of time.
By contrast, operating expenditure is a cost associated with operating the business over a short period, typically a year. All payments during this year count against the income statement and do not directly affect the balance sheet.
From an organisational perspective, the balance sheet is the bailiwick of the CFO, who typically screens all requests for asset expenditure very carefully, while operating expenditures are the province of business units, who are able to spend within their yearly budgets with greater freedom.
Summing this up, it means that running an application and paying for its compute resources on an "as-used" basis means the costs run through the operating budget (i.e., are operating expenditures, opex), while running the same application and using resources that have been purchased as an asset means the cost of the resources is a capital expenditure (capex), while the yearly depreciation becomes an operating expenditure.
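The capex flow just described can be made concrete with a straight-line depreciation calculation, the most common method. A minimal sketch, assuming a hypothetical $10,000 server, zero salvage value, and a five-year schedule (none of these figures come from the text):

```python
# Sketch of the capex flow: a purchased server hits the balance sheet
# at full cost, then feeds the operating budget as a yearly
# depreciation charge. All figures are hypothetical.

def straight_line_depreciation(purchase_price, salvage_value, years):
    """Yearly depreciation charge under the straight-line method."""
    return (purchase_price - salvage_value) / years

# A $10,000 server depreciated to $0 over five years:
yearly_charge = straight_line_depreciation(10_000, 0, 5)
print(yearly_charge)  # 2000.0 per year counts against the income statement
```

Whatever use is made of the server in a given year, that $2,000 charge lands on the operating budget all the same, which is the nub of the comparison that follows.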
It might seem obvious that the opex approach is preferable. After all, one simply pays for what one uses. By contrast, the capex approach assigns a fixed depreciation fee no matter what use is made of the asset.
However, the comparison is complicated by the fact that cloud service providers who charge on an as-used basis commonly add a profit margin to their costs. An internal IT group does not add a margin, so it charges only what its costs add up to. Depending upon the usage scenario of the individual application, paying a yearly depreciation fee may be more attractive than paying on a more granular basis.
The logic of this can be seen in car use. It's commonly more economical to purchase a car for daily use in one's own city, but far cheaper to rent a car for a one or two day remote business trip.
There is an enormous amount of controversy about whether the capex or opex approach to cloud computing is less expensive. We've seen this in our own business. At one meeting, when the topic of using AWS as a deployment platform was raised, an operations manager stated flatly, "you don't want to do that, after two years you've bought a server." Notwithstanding his crude financial evaluation (it clearly did not account for other costs like power and labour), his perspective was opex vs capex: that paying for resources on a granular basis would be more expensive than making an asset purchase and depreciating it.
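The operations manager's back-of-envelope claim can be restated as a break-even calculation. The hourly rate and server price below are hypothetical placeholders, and, as noted above, a fair comparison would also fold in power, labour, and other operating costs:

```python
# Break-even point between renting compute by the hour and buying a
# server outright. All prices are hypothetical; real comparisons must
# also include power, labour, cooling, and so on.

def months_to_break_even(hourly_rate, server_price, hours_per_month=730):
    """Months of full-time rental at which cumulative rental cost
    equals the server's purchase price."""
    monthly_rental = hourly_rate * hours_per_month
    return server_price / monthly_rental

# A $0.50/hour instance running 24x7 vs an $8,000 server:
months = months_to_break_even(0.50, 8_000)
print(round(months, 1))  # 21.9 months -- roughly the "two years" quoted
```

Note how sensitive the result is to utilisation: a server used only during business hours takes several times longer to "buy" through rental fees, which is precisely why the usage scenario matters so much.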
The move to private clouds added to the complexity of this. Heretofore, most organisations worked on the basis of one application, one server, so the entire depreciation for the server was assigned to one application, making the calculation of how much the capex approach would cost relatively straightforward.
This became further complicated with the shift to virtualisation, in which multiple applications share one server. Now yearly depreciation must be apportioned among multiple applications. Matters grow more complex still if one attempts anything beyond simply dividing the cost by the number of VMs on the machine: assigning cost by an application's share of total memory, or of processor time, requires instrumentation and more sophisticated accounting methods. Most organisations therefore just work on a rough "X dollars, Y VMs, each one costs X divided by Y" basis.
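The rough "X divided by Y" method can be sketched next to a usage-weighted alternative. The depreciation figure and the memory-share weights below are illustrative assumptions, not figures from the text:

```python
# Apportioning a server's yearly depreciation across the VMs it hosts.
# All figures are hypothetical.

def equal_share(yearly_depreciation, num_vms):
    """The rough 'X dollars, Y VMs, X divided by Y' method."""
    return yearly_depreciation / num_vms

def weighted_share(yearly_depreciation, usage_fractions):
    """A usage-weighted method, e.g. by each VM's share of total memory.
    Unlike equal_share, this requires instrumentation to collect the
    per-VM usage figures."""
    total = sum(usage_fractions)
    return [yearly_depreciation * f / total for f in usage_fractions]

# $2,000 of yearly depreciation across four VMs:
print(equal_share(2_000, 4))                # 500.0 each
print(weighted_share(2_000, [4, 2, 1, 1]))  # [1000.0, 500.0, 250.0, 250.0]
```

The gap between the two columns of output is the accounting sophistication most IT organisations currently lack, a point the article returns to below.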
Today, though, organisations using compute resources don't want to pay a flat fee. After all, their use may be transitory, spinning up resources for a short-term test or a short-lived business initiative; why should they commit to a five-year depreciation schedule?
Resource consumers expect to pay on an operating expenditure basis, as that's what's out there in the market. They want to pay only for what they use, no matter who the provider is.
IT organisations are intrepidly preparing for this world, implementing private clouds and moving toward granular pricing of resources, a task made difficult, it must be admitted, by the fact that most IT organisations do not have accounting systems designed to support detailed cost tracking.
So it will be the best of all worlds: resource consumers getting granular, use-based costing; IT organisations providing private cloud capability with support for sophisticated cost assignment; and no provider profit motive imposing additional fees beyond base costs.
Or will it?