Arthur C. Clarke’s novel The City and the Stars voiced two different possibilities for a future society. One was a city run by computers, totally detached from the physical outside world, forming its own microcosm. The other was a society living in harmony with the physical world, but augmenting it with technology. A generation ago there was great fear that our future would be a sterile world run by computers, like the detached microcosm city, and that we would lose something human. But modern developments, concerns for the environment, human engineering and so on have tended to lean towards the other option: humans developing technology that takes its cues from the real world.
Interfacing technology via the cloud is an example of this. Car manufacturer Volvo envisions a system where information from a car is distributed via a cloud connection. If your car encounters slippery road conditions, traffic, or anything else of concern to motorists, the car’s cloud connection passes the information to others nearby who might be using the same roads.
This is not so much a new concept as the development of an old one. We’ve had traffic reports for years, and we have smartphone apps where individuals can look up road and weather information. But this approach is more integrated; the information goes straight to the car and driver, who doesn’t have to use a phone while driving or listen out for a possibly relevant radio report. It means a car’s navigation system can give the most economical route to a destination at a particular point in time, rather than a route that would be best under ideal conditions. This information is current, integrated with the car’s systems, and far more detailed than before.
Of course it is only as good as the people who use it. If only a few cars and drivers have the system, they have to hope some other driver has already gone the route ahead of them if they are to gather any useful information. If the majority of people have cloud-accessing systems, there will always be relevant information as long as there is some traffic on the road. Any driver can benefit from the experience of another driver, even if that experience was only moments before.
Undoubtedly this could have an impact on insurance. Hopefully it will prevent a few incidents, but even if it doesn’t stop all problems it might help if we know that the drivers were at least complying with up-to-date information and following a recommended path.
Technology with this approach adapts to the world, which is also a world that we have adapted to us, having built cities, roads and other technology. The information these systems convey is far more extensive than before, but it is never complete or final, as the outside world is always changing. Any concern we once had of being isolated in our own stagnant world now seems unfounded. In an infinitely complex and changing reality there will always be constant change in how we adapt to it.


We get the impression that going to the cloud just means signing up and sorting out the payment and login details. Of course this is false. We have no end of articles telling us about teething problems, or about being better off with hybrid or private or public cloud, or anything other than what we had before. But there are still more things to consider, and that means wading through a lot of figures and making a few worrying decisions. Actual experience might change a few decisions and opinions too.
Speed will vary according to which cloud service you use, both between companies and between the options within a company. Bigger machines can be slower, and the reasons for this vary. This is complicated further by the fact that different test programs can run quickly in some situations but less quickly in others, and not run at all on servers that aren’t configured for that particular software.
It would make sense if the more expensive services were faster, or had some other advantage, but this is not always the case. More CPUs will be faster, all else being equal, but while 8 CPUs will be faster than one they are unlikely to be 8 times as fast. Windows Azure machines were more than twice as fast when they used the 8-CPU option instead of one, but that doubling of speed came at 8 times the cost.
If you increase the number of CPUs but keep the same amount of RAM you might save some cost. But this can affect performance in unpredictable ways. Sometimes a 2-CPU machine is faster than a single-CPU machine with the same RAM, but not by a large amount (perhaps 30%). Sometimes the 2-CPU machine is actually slower.
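As a rough illustration, the trade-off can be put in numbers. The prices and speedups below are entirely hypothetical, loosely modelled on the figures above (around twice the speed at eight times the cost, and a 2-CPU machine perhaps 30% faster than one):

```python
# Sketch: comparing the cost-effectiveness of instance sizes.
# All prices and speedups are hypothetical, for illustration only.

def cost_per_unit_of_work(hourly_price, relative_speed):
    """Price paid per unit of work completed; lower is better."""
    return hourly_price / relative_speed

options = {
    "1 CPU":  {"price": 0.10, "speedup": 1.0},
    "2 CPUs": {"price": 0.20, "speedup": 1.3},  # ~30% faster than one CPU
    "8 CPUs": {"price": 0.80, "speedup": 2.2},  # >2x speed at 8x the price
}

for name, o in options.items():
    rate = cost_per_unit_of_work(o["price"], o["speedup"])
    print(f"{name}: {rate:.3f} $ per unit of work")
```

On figures like these, the biggest machine finishes jobs fastest but pays the most per unit of work done; which matters more depends on whether you are paying for speed or for throughput.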
Google Cloud seems to align performance fairly with price, so that its expensive options do have significantly better performance. How that compares to the options offered by another company is another matter.
To make matters more confusing, performance varies over time even with the same system and provider. If you’re using the cloud you are sharing resources with other users; that’s what the cloud is about. If a lot of computing power is being used by other groups, you don’t get any special consideration; things will be slower. Occasionally this works to your advantage and you get bursts of high-speed interaction, but only at off-peak times. If you only want lots of computing power for short periods this might be fine. Constant usage, however, will probably mean fluctuating speeds.
Fluctuating cloud speed might make a smaller provider more attractive, or a super-large provider with greater resources; but you need a lot of CPUs and RAM to achieve significantly higher speeds. Even then it’s hard to know which option is best, as speed still varies with the application and will fluctuate with user traffic. You can’t know these things until you (or somebody else) has tried the service, and even then it might degrade if the provider accumulates more clients, or improve when they decide to upgrade.
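One way to get a feel for these fluctuations is simply to time the same workload repeatedly on the machine you’re trialling. A minimal sketch (the workload here is an arbitrary stand-in, not any provider’s benchmark):

```python
# Sketch: measuring how a machine's speed fluctuates over time.
# Run a fixed CPU-bound task repeatedly and record each run's duration;
# on a shared cloud host the spread between runs can be considerable.
import statistics
import time

def workload(n=200_000):
    """A fixed CPU-bound task to time; any repeatable job would do."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def benchmark(runs=5):
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        durations.append(time.perf_counter() - start)
    return durations

times = benchmark()
print(f"mean {statistics.mean(times):.4f}s, "
      f"spread {max(times) - min(times):.4f}s")
```

Running this at different times of day gives a crude picture of how much your share of the machine varies with other users’ traffic.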


– Security will always be a concern. But this is most noticeable in a time of transition, specifically a business’s transition to the public cloud. A mixture of new and established approaches will be used and modified for the cloud environment. There will always be hackers and other threats, but the cloud will end up at least as safe as previous computing models.
– Hybrid cloud, the combination of public and private, will be common. Hybrid cloud is not a set template, so each company will have its own combination that integrates the two forms. The solution provider will need the skills to do this, but there should be plenty of work for those who are capable.
– Mobile apps already allow employees to work from anywhere. Having half a dozen enterprise apps or more on your smartphone will soon be quite regular. Offices will become further decentralized as employees work from wherever they are.
– The phone and IT systems may merge; mergers and acquisitions between companies may be common, and hopefully mean well integrated services.
– The internet of things has been imminent for a while. If it seems to be progressing slowly, it may be because of the huge range of items and services that may come about, and the fact that the changing computer landscape means nothing can be finalized until standards are set. Watches and Google Glass have been selling for a while. Other wearables such as clothing with sensors also exist. Expect home appliances, furniture, healthcare, manufacturing and so much else to be net connected.
– Many trends followed or predicted in the past have proved false, often comically so. We tend to notice either the true patterns or the laughably false ones at any point in time, and draw attention to one or the other. Really, some patterns prove accurate and some do not. The fact that the false predictions are permanently recorded on the net proves embarrassing; the true predictions are also recorded, but just seem obvious in hindsight.


Data integrity has to be an important issue for everybody. Sure, some things may be more important than others, but does anybody bother to store data that isn’t of some importance? The matter is important for the reputation of the cloud providers. Even if the things they lost last time were somehow unimportant, wouldn’t we take that as a warning not to trust them next time?
Cloud providers tend to balance the cost-effectiveness of services against their quality; it’s the same in a lot of business situations. There is a general trend for the client’s level of protection to vary with the type of service. IaaS (Infrastructure as a Service) provides a means of creating a cloud environment, but data backup may not be included. PaaS (Platform as a Service) will have more selling points, usually including data protection. But this varies greatly between providers. Some IaaS providers have good solid data protection options, though you will pay a little more for this security.
There is data on the cloud that is part of the provider’s operations and does not affect the customer to any real degree. Extreme loss here might jeopardise the provider’s business, but really their ability to protect this data is only a concern to us insofar as it indicates their general level of security.
Of more concern is data loss that affects either both the provider and the client, or only the client. The first category includes environment matters: configurations, virtual networking, provisioning management and so on. This is not the data provided by the client but the packages it is stored in; losing it is like losing your word processor program. The second category is the client’s information, the data they put on the cloud. This is like the words typed into the word processor, but not the program itself. Loss of this is what we are most concerned with.
Really it is the customer who is most responsible for this data. Providers vary in what they offer, so the client has to choose the best options available. There was certainly data loss before the Cloud, and we took measures against it then. We need to be equally active now, or at least use a provider who is active for us. Some measures include:
Disk-level data protection: An old but effective practice.
Periodic backup: Backing up data at intervals to a lower-cost medium. Somebody has to decide how often to do this, and understand that the most recent updates will be lost, but otherwise this is a tried and true system. Storage of several terabytes is quite cheap these days.
Data replication: Another older idea that stays in use because it works well. Software sends all the data to two different storage media. But check the ability to retrieve the data from secondary resources.
Journaled/checkpoint-based replication.
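As a sketch of the replication idea, the snippet below writes every file to two storage locations and then checks that the secondary copy can actually be retrieved intact, which is the step that is too often skipped. The directory names and checksum scheme are illustrative only, not any provider’s API:

```python
# Sketch of data replication: every write goes to two independent
# storage locations, and a verify step confirms the secondary copy
# is retrievable and intact. Paths here are hypothetical examples.
import hashlib
import pathlib

PRIMARY = pathlib.Path("storage_primary")
SECONDARY = pathlib.Path("storage_secondary")

def replicated_write(name, data: bytes):
    """Write the same bytes to both locations; return a checksum."""
    for root in (PRIMARY, SECONDARY):
        root.mkdir(exist_ok=True)
        (root / name).write_bytes(data)
    return hashlib.sha256(data).hexdigest()

def verify_secondary(name, checksum):
    """Read back the secondary copy and compare checksums."""
    copy = (SECONDARY / name).read_bytes()
    return hashlib.sha256(copy).hexdigest() == checksum

digest = replicated_write("report.txt", b"quarterly figures")
print("secondary copy ok:", verify_secondary("report.txt", digest))
```

Real replication software does this continuously and at block or journal level, but the principle is the same: two copies, and a tested path back from the second one.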

The cloud infrastructures and provider services are important to look at, but we must remember that the main reason for downtime remains human error. One human error is simply choosing an option that isn’t the best suited.



A rapidly changing technology can leave opportunities for exploitation; authorities cannot always predict potential problems, meaning there is a brief window between a criminal discovering a security issue and the authorities finding the best way to deal with it. Occasionally a general idea of future problems can be investigated in advance. Online attempts on people’s lives are one of several issues under investigation.
Assassinating somebody over the internet is not too far-fetched. Medical devices such as pacemakers and defibrillators can have wireless controls. Hacking into those controls could allow a terrorist to assassinate a person from a great distance, and it might even pass as death from natural causes. US politician Dick Cheney had the wireless function removed from his defibrillator for this very reason.
The internet of things (IoT) may present countless opportunities to be exploited by criminals. A few possible means of attack can be predicted in advance and countermeasures taken, but there may be problems not apparent until it is too late. Anything medical can be a risk. Finding an individual’s allergy and poisoning them is not difficult, though this requires some physical interaction and not just computer access. But controlling thermostats and freezing people to death, or suffocating them by shutting down the air conditioning while they sleep; these are feasible under some conditions.
International borders and the worldwide reach of the net have had more than a few clashes over the years. What is legal in one country can be criminal in another, causing issues if a person from the first country performs actions over the net that affect a second country. But even if an action is illegal in both countries, there are the jurisdictions of each country’s authorities to consider.


Cloud Advantages and Disadvantages

A particular piece of software can appear to be a cheaper option on the cloud, compared to the same software run in-house. But is it the same? It might be a stripped-down version, missing some features. Do you need all the features? Maybe the stripped-down version is better for you, or maybe something essential is missing. Or is it an updated and improved version, but incompatible with the other software on your own system? If you buy into somebody else’s cloud there tends to be compatibility between the different systems there (tends to be, though not always); but it may not be compatible with some package you have in-house. Even moving text from a movie script to a word-processing package can result in pages of unformatted sentences: no paragraph breaks, no spaces, just one long lump of words. And that is a minor example.
Inflexibility. This need not be too much of an issue if you understand your own requirements in advance. There are so many cloud possibilities that you should be able to find one that suits your business needs. Otherwise, you can have a private cloud set up the way you want. Transferring all your old files will be an issue, but remember that you are making a major upgrade, and that major upgrades are a step ahead and essential lest you be left behind. Your previous system is soon to be obsolete. Being locked into a system is an issue, however. The solution is a combination of flexibility and being locked into a system that works well.
Security issues. These receive a huge amount of publicity. They are probably less common than the media might have us believe, but truly disastrous when they do occur. Smaller businesses have an advantage here with the cloud: the cloud provider can provide more security for a group of small companies than a single company could afford on its own. And if your company has a system on the net at all (even pre-cloud) it has some security concerns. Watch for situations where information is moved automatically. The celebrity cloud photo hacks seemed to involve photos that were automatically backed up to the cloud. If it was never on the cloud in the first place it is far less likely to be leaked. Also remember, you may have to convince your clients that the cloud is secure, and not just yourself.
Possible downtime. Not a regular occurrence, but it is a fact of life. Can you afford to be offline for long? Can at least some downtime be scheduled in advance? There should be regular maintenance. Look for systems with redundancy that let you access data all the time; they run at least two systems concurrently and let one update the other. Also look for a minimal downtime guarantee.
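The redundancy idea above can be sketched as a simple failover read: try the primary system, and fall back to the secondary if it is down. The endpoint names and the fetch function below are placeholders, not a real provider API:

```python
# Sketch: failover between redundant systems so data stays accessible
# during downtime. "primary"/"secondary" and fetch() are hypothetical.

def fetch(endpoint, record_id):
    """Placeholder fetch; a real version would make a network request.
    Here the primary is simulated as being down."""
    if endpoint == "primary":
        raise ConnectionError("primary is down for maintenance")
    return {"id": record_id, "source": endpoint}

def read_with_failover(record_id, endpoints=("primary", "secondary")):
    """Try each redundant endpoint in order until one answers."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint, record_id)
        except ConnectionError as err:
            last_error = err
    raise RuntimeError("all endpoints down") from last_error

print(read_with_failover(42))
```

In a real redundant setup the two systems would also replicate updates to each other, so whichever one answers is reasonably current.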


With the cloud we can connect large pools of resources via a network, significantly reducing costs if we plan things well, and scaling use to suit our needs. But there are differences between public, private and hybrid clouds.

Public clouds are owned by a third party. There is the advantage of sharing in something that has been bought in bulk, reducing costs. There is also the advantage that you only pay for what you use, and you can expand and pay for more use quite quickly. Your customers can also have this advantage, expanding rapidly if needed but only paying for what is actually used. This system is vastly different to the previous non-cloud model where companies had to estimate in advance what storage space was needed; overestimation meant paying for too much, underestimation meant missed opportunities and a mad rush to upscale the operation.
The disadvantage of this public facility is that users have the public infrastructure to work within, which may or may not be what they want. However, with multiple third parties offering public clouds, companies can usually find one that fits their needs.
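The difference between paying for use and estimating capacity in advance can be sketched with some hypothetical figures. The pre-cloud model pays for peak capacity every month of the year; the pay-as-you-go model pays only for what each month actually uses:

```python
# Sketch: pay-as-you-go vs provisioning for the peak in advance.
# All usage figures and prices below are hypothetical.

monthly_usage = [20, 25, 30, 90, 40, 35, 30, 25, 80, 45, 30, 25]  # units
unit_price = 2.0  # cost per unit per month

# Cloud model: pay for each month's actual usage.
cloud_cost = sum(u * unit_price for u in monthly_usage)

# Pre-cloud model: buy capacity for the busiest month, pay for it all year.
provisioned_capacity = max(monthly_usage)
fixed_cost = provisioned_capacity * unit_price * len(monthly_usage)

print(f"pay-as-you-go: ${cloud_cost:.0f}")
print(f"fixed provisioning at peak: ${fixed_cost:.0f}")
```

With usage that spikes only occasionally, the gap between the two totals is large; with flat, predictable usage it mostly disappears, which is why the pay-for-use advantage depends on your traffic pattern.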

A private cloud is built exclusively for an individual enterprise. The company can completely control the infrastructure and resources, and handle security however they wish. An on-premise private cloud allows complete control of the system, but the physical hardware and its storage capacity have to be estimated and purchased in advance, limiting the fast scalability that was one of the attractions of the cloud in the first place. Still, company employees can share resources and files efficiently.
Externally hosted private clouds are hosted by an external cloud provider. Unlike the public cloud there is a fair guarantee of privacy, and the infrastructure can be set up to the individual business’s needs.
Hybrid clouds combine the advantages of private and public. Scalability can be external and provided as needed, but the system can use an infrastructure the business finds appropriate. There is more than one type of hybrid, so options vary considerably. It is possible for companies to use a public cloud to absorb any needs not met by their private cloud, and update the private facilities later on.


Google’s cloud platform will be modified to make it more compatible with Windows licences and applications. As well as the obvious convenience of moving Windows apps to the new cloud platform, and the advantage of using familiar programmes, the compatibility means there will be no additional licencing fees for those already holding Windows licences.

A large portion of today’s business needs run on Windows, and Google wants to cater to this market. Other cloud platforms have already had Windows compatibility, but some companies want to be present on more than one platform. One reason is that companies want to avoid being locked into one service provider, where fees can be considerable. Using one vendor makes any future changes difficult, as everything has to be moved. With multiple platforms, only the contents of one platform need to be moved during a change. The other reason for multiple platforms is to take advantage of the different services each platform offers.

It has already been announced that Windows Server 2008 R2 Datacenter Edition is running on the Google cloud platform, with the later Windows Server 2012 and 2012 R2 coming in the near future. SQL Server, SharePoint and Exchange Server are already transferable to the cloud, at no additional cost, for those who have already paid their Microsoft fees. Customers are also being offered free use of the popular Chrome RDP app, which has been optimized for the cloud.

Google has come a little later to the cloud hosting game, but if customer companies do want to spread themselves across platforms then Google may not be too disadvantaged. Google is a large group in its own right. Customers wanting several cloud platforms will certainly consider including Google along with the other big names.



Find the right cloud provider for you.
There is Platform as a Service, Infrastructure as a Service, Software as a Service, backup and disaster recovery, and so on. You may need all of these, or just some. Just because it is all called cloud does not mean you need all that’s there.
Shadow IT. Prevent multiple versions of the same solution. Innovations are good; incompatible versions are not.
Assess which applications to migrate. Consider compatibility and interoperability.
– You can re-host the same application as before, which makes for a quick move, but you will not be optimising the application for the cloud. Consider upgrading as you move.
– Revising an application to suit the Cloud and changing business requirements will take some time, and risks incompatibility if you don’t update everything else at the same time; yet everything will need updating eventually.
– Consider applications offered by the service provider; they probably know their own cloud system well.
Realize that moving to a cloud system might mean being locked in for an extended period of time. Ask whether this is good or bad in the long term. Are you going to be locked in by penalties?
Always think of Cloud as a move forward, but also a total change in approach. It’s not a better version of the old way of doing things; it is a new way entirely. Consider making changes to the applications and the architecture in order to take advantage of the Cloud system.
Look for redundancies, including licences.
Don’t limit yourself to short term thinking. Your business should expand; Cloud should have room for this and more.
Find out what other companies have done, but realize that they all did it differently. Listen to advice, but find what works for you, and what you want your system to do in the future.

Azure in Australia

Microsoft has announced data centres in Melbourne and Sydney for Azure cloud services. They have also announced Azure app stores and connections dedicated to the new centres. These dedicated connections ensure a high quality of service. Previously, Australian connections to Azure in Singapore were plagued by slow speeds because of the distances involved.

Apart from the availability of Azure, the new centres will provide a greatly increased economy of scale, keeping prices down. Each location can accommodate about 600,000 servers. The fact that there are two data centres means information can be stored at two locations, providing backup.

Previously, Australian business use of Azure raised issues about data being held offshore. Information time delays were minor compared to the varying security and privacy laws in different political jurisdictions. Some Australian companies preferred to keep their information storage local, even if it meant a lesser-known, more expensive provider.

Azure cloud offers many advantages for users. It uses the familiar Windows platform and is easily scalable from a very small to a very large number of users. The fact that a business only needs to pay for what is used means there is no redundant hardware outlay, further reducing running costs.

A recent Azure outage in November had no effect on Australian servers. The outage was traced to a configuration change recently introduced into the Azure system.