There was a steep rise in the number of reported security breaches in 2011. Part of the increase was due to better detection and reporting, but attacks have also grown more sophisticated over the past few years. The sheer size of public clouds often makes it easier for an attacker to find a way in, and harder for the intrusion to be detected.
Leasing and sharing infrastructure comes with drawbacks, and a security threat can very well arise from within the cloud itself. Outside the secure enterprise firewall, the basic building blocks of security, such as authentication and encryption, have to be thought through all over again. The fact that someone else may hold the keys and passwords to your data can greatly affect your service architecture.
With wider adoption and increasing awareness, cloud providers will have to rethink security. Better security initiatives, new audit standards (ISAE 3000) and greater accountability will eventually give us a more secure cloud.
In 2011 a number of vendors launched “cloud” or “virtual cloud” editions of their traditional on-premises applications. Some analysts called it cloud washing (by analogy with brainwashing), because these applications were not designed for the cloud and lacked critical security and availability features.
As enterprises understand clouds better, the “real cloud” vendors will be more visible in 2012. The parameters around quality of service and enterprise SLAs will distinguish the “real” cloud vendors from the rest.
The promise of “scaling linearly” has made enterprises ride the cloud wave much faster than they had planned. As more and more data accumulates in the cloud, new challenges arise in managing it. Zettabyte clouds are not unheard of, and petabyte clouds are more common than before. The challenge of scale is even more complex for structured data, which brings additional requirements such as quick access and transactional commits.
Although technologies like Hadoop keep evolving with better offerings, the commercially available solutions aren’t mature enough yet to solve real-world big-data challenges. While most of the large cloud players are still using something homegrown, this year will see a big rise in commercially viable offerings that help newer cloud providers scale better.
Another interesting trend to watch will be the changing nature of cloud storage. Unlike compute, cloud storage does not yet offer any classification of quality of service. A storage hierarchy within the cloud, with “less available” or “offline” options, would save cost and is something we will likely see in 2012.
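To make the idea of storage tiers concrete, here is a minimal sketch of how a client might pick a tier by access pattern. The tier names, per-GB prices and thresholds below are hypothetical, not any vendor’s actual offering.

```python
# Hypothetical storage tiers: cheaper tiers trade away retrieval speed.
TIERS = {
    "standard": {"price_per_gb": 0.125, "retrieval": "immediate"},
    "reduced":  {"price_per_gb": 0.093, "retrieval": "immediate"},
    "offline":  {"price_per_gb": 0.010, "retrieval": "hours"},
}

def choose_tier(accesses_per_month: int) -> str:
    """Pick the cheapest tier whose retrieval profile fits the access rate."""
    if accesses_per_month == 0:
        return "offline"   # archival data tolerates retrieval delays of hours
    if accesses_per_month < 10:
        return "reduced"   # rarely read; a cheaper tier is acceptable
    return "standard"

def monthly_cost(gb: float, tier: str) -> float:
    """Storage cost per month for the given size and tier."""
    return gb * TIERS[tier]["price_per_gb"]
```

For example, a terabyte of cold archival data (`choose_tier(0)` → `"offline"`) would cost a fraction of keeping it in the standard tier, which is exactly the cost saving such a hierarchy promises.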
Public clouds are still considered external to an enterprise. This year will see better integration of cloud with enterprise resources like Active Directory for improved security, single sign-on and seamless access.
Integration between different cloud services like Salesforce.com will also be key for different enterprise teams to exchange data and collaborate.
The fear of single-vendor lock-in drove the need for some highly successful open standards like Open Storage, NDMP and NFS. With the high adoption of object-based storage (AWS S3) and new NoSQL databases (AWS DynamoDB, Google BigTable), new cloud players will be forced to make their solutions API-compatible with the leading players. This will drive customer adoption and better cloud-friendly designs, without the fear of vendor lock-in.
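What API compatibility buys the customer can be sketched in a few lines: if a provider accepts the same request shapes as S3, switching providers amounts to changing an endpoint. The second endpoint below is a made-up example, not a real service.

```python
def object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style URL for an S3-style GET Object request.
    The request shape is identical for any S3-compatible backend;
    only the endpoint host differs."""
    return f"https://{endpoint}/{bucket}/{key}"

# Same bucket and key, two interchangeable backends:
aws_url   = object_url("s3.amazonaws.com", "my-bucket", "data/report.csv")
other_url = object_url("storage.example-cloud.com", "my-bucket", "data/report.csv")
```

Because nothing else in the client changes, an application written against the dominant API can move between compatible providers with minimal rework, which is the lock-in relief the trend promises.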
Amazon is a role model for service-oriented architectures. As different cloud options become viable, new cloud application architectures may no longer be restricted to a single cloud backend for their different service components.
Load balancing, the fear of single-vendor lock-in and the high availability of cloud infrastructure will open the door to cloud balancing. Vendors will look for ways to build cloud services from the best of what different providers offer, and use interoperability to balance load between different cloud backends.
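The cloud-balancing idea above can be sketched as a simple round-robin dispatcher that skips unhealthy backends. The backend names and the health-check callback are hypothetical, standing in for whatever interoperable providers an application uses.

```python
import itertools

class CloudBalancer:
    """Spread requests across interchangeable cloud backends,
    failing over past any backend that is currently down."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)
        self._count = len(backends)

    def pick(self, is_up):
        """Return the next healthy backend, round-robin.
        is_up is a callable deciding whether a backend is reachable."""
        for _ in range(self._count):
            backend = next(self._cycle)
            if is_up(backend):
                return backend
        raise RuntimeError("no healthy cloud backend available")

balancer = CloudBalancer(["cloud-a", "cloud-b", "cloud-c"])
```

If `cloud-b` goes down, successive `pick` calls simply alternate between `cloud-a` and `cloud-c`, which is the availability benefit interoperable backends make possible.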