Recently I posted some thoughts on patterns in cloud computing to a mailing list ( http://groups.google.ca/group/cloud-computing/browse_thread/thread/86ee2f8f4afd71c1# ), comparing on-demand computing resource pools to human resources.
Indeed, both target the same goal: decrease the fixed costs of ownership and leave more room for manoeuvre. There are differences, though: computing resources (CPU, RAM, disk storage, etc.) are of known quality, while people (consultants) are not always.
Another thing that has been lurking in my mind: if you compare a compute cloud to an Earth subsystem like the hydrosphere (which is actually fed by precipitation from the atmosphere, read: clouds), then it has similar properties, like streams, pollution, draining, freezing, etc.
Freezing of storage resources might be compared to loss of elasticity: the inability of some storage space to efficiently hold the optimal amount of data. For example, 1 GB of storage holds 10 copies of the same file (or slightly different versions) under different names. I also picture some dynamic process eating RAM, which reminds me of water changing physical state (steam, or a geyser if you like).
This might be solved by a deduplication process, so the storage unit gets back its original properties (defragmentation comes to mind as well).
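To make the deduplication idea concrete, here is a minimal sketch in Python. It only groups byte-identical files by a content hash; the function name and approach are my own illustration, and real deduplication systems typically work at the block level and use hard links or reference counts rather than whole-file hashing.

```python
import hashlib


def deduplicate(paths):
    """Group byte-identical files by a SHA-256 content hash.

    Returns a dict mapping each content digest to the list of
    paths sharing that content. Every group with more than one
    path is redundant storage that could be reclaimed, as in the
    "10 copies of the same file" example above.
    """
    groups = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        groups.setdefault(digest, []).append(path)
    return groups
```

Given the example above (10 copies of one file in 1 GB of storage), this would report one group of 10 paths, and 9 of them could be replaced by references to a single stored copy.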
I don't know yet whether these restoration/optimization processes should belong to the system (the compute cloud solution) or to the entities that run inside the cloud; the future will show.
Jumping aside, I would also note the popular Observer pattern, used in publish/subscribe systems, which I suggested for cloud implementations in some other posts to the same discussion list, for the purpose of reducing bandwidth/resource consumption.
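As a sketch of what I mean, here is a bare-bones publish/subscribe broker in Python (the class and method names are mine, not from any particular cloud product). The bandwidth saving comes from pushing a message once to each interested subscriber instead of having every client poll for changes.

```python
from collections import defaultdict


class Broker:
    """Minimal publish/subscribe hub (Observer pattern sketch).

    Subscribers register a callback for a topic; publishing a
    message notifies only the subscribers of that topic, so
    uninterested parties consume no bandwidth or CPU.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register an observer for one topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Push the message to each observer of this topic only.
        for callback in self._subscribers[topic]:
            callback(message)


# Usage: a monitoring client subscribes to CPU-load events and is
# notified on publish, with no polling loop on its side.
broker = Broker()
broker.subscribe("cpu-load", lambda value: print("cpu-load:", value))
broker.publish("cpu-load", 0.75)
```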
I'll keep you informed in this thread. If you think I'm crazy, comments are welcome.