Reinventing Enterprise Application Development
Containers are the latest shiny objects in IT. They got their start as a better, easier, faster way to create “born on the web” applications, and quickly spread throughout the developer community. Now CIOs are starting to take notice. The reason? Containers not only speed the development of new applications, but they have the potential to exponentially streamline operations, reduce cost, and make enterprise IT as agile as a startup.
In other words, containers are hot because of their impact on business goals. In fact, it’s not uncommon for a CIO to announce that they have just established a container initiative, and at the same moment ask if we could please help them understand what containers are all about. Committing to something you don’t really understand is not usually the best way to get ahead in life, but in this case it underscores that the decisions are being driven by the anticipated impact containers will have on the business rather than an infatuation with the latest technology.
How to put containers into practice, however, is not a simple answer. There’s more to it than simply choosing container software, and a lot of it depends on the applications and environment at hand. Not all applications – or situations – should be treated the same.
Let’s explore what it takes for containers and these emerging enterprise cloud computing models to increase speed and agility for new application development, increase flexibility and lower cost for legacy applications, and achieve the business outcomes you’re looking for.
First, some container basics. Containers make applications much more portable because they encapsulate everything needed to run an application in a way that assures that it will always run the same way, no matter where it’s moved to, or what kind of environment is running it. How has that newfound power been used?
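To make that concrete, a container image definition declares everything the application needs in one place. The sketch below is a hypothetical Dockerfile for a small Python application (all names are illustrative, not a recommendation of any particular stack):

```dockerfile
# Hypothetical image definition. Everything the app needs -- base image,
# dependencies, entry point -- is declared here, so the resulting
# container runs the same way on a laptop, a test cluster, or production.
FROM python:3.6-slim                  # stripped-down base instead of a full OS
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies baked into the image
COPY . .
CMD ["python", "app.py"]              # one entry point, identical everywhere
```

Because the image is self-describing, "works on my machine" problems largely disappear: once the image builds, it behaves the same wherever it runs.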
Streamline process for application development
Containers have enabled a revolution in how applications are developed and run. DevOps and CI/CD (Continuous Integration/Continuous Delivery) are made practical by the portability containers provide and the assurance that they will run wherever they’re moved. No more setting up environments, creating and defining VMs, and loading applications every time an application moves to test or production. Containers automate many of the tasks formerly borne by developers and streamline the “Operations” part of DevOps to simply coordinating the activity.
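As a sketch of how this looks in practice, here is a hypothetical CI/CD pipeline definition (GitLab CI syntax; the registry name and `deploy-tool` command are invented for illustration). The image is built once per commit, and that same artifact moves unchanged through test and production:

```yaml
# Hypothetical CI/CD pipeline: build the container image once, then
# promote the *same* image through each stage -- no environments to
# rebuild, no VMs to redefine along the way.
stages: [build, test, deploy]

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

run-tests:
  stage: test
  script:
    # tests run inside the exact image that will ship
    - docker run --rm registry.example.com/myapp:$CI_COMMIT_SHA pytest

deploy-prod:
  stage: deploy
  script:
    - deploy-tool rollout registry.example.com/myapp:$CI_COMMIT_SHA  # hypothetical deploy step
  when: manual
```

The point is not the specific tool: any pipeline that builds one immutable image and promotes it through stages captures the same streamlining.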
Lower software costs for “born on the web” applications
Getting more code into production faster from fewer developers is just the start. Because containers are fully self-contained, they don’t have to be saddled with bloated operating systems, database engines, and virtualization software designed for anything and everything. Instead, a stripped-down kernel can be used in place of a full OS, shared libraries replace databases, and shared services replace countless utilities and middleware. What virtualization did to reduce hardware cost, containers do for infrastructure software.
Greater flexibility and optimization for legacy applications
So far we’ve addressed what containers can do for new applications – but what about existing applications? It would be a lot of effort to refactor the applications that already exist to replace OS, database, and other code with the shared services of microservices architecture. Taking the application as is (existing virtualization software, OS, database, and everything in place now), and putting it in a container will sometimes provide a fairly big gain for very little effort. This is especially true if you’re already using containers for new application development. Containers make it easy to relocate applications to accommodate workload changes and to optimize the use of different infrastructure choices – and infrastructure costs – that are available to you.
Growing popularity
All this helps to explain the popularity of containers. A recent report from the Cloud Foundry Foundation found that 53% of organizations have either deployed containers or were evaluating them. Nearly two-thirds of those organizations said they expect to “mainstream” container use over the next year.
As CIOs explore new ways for the cloud to power their digital business, many are looking to revamp outdated IT delivery methods – whether those are traditional IT teams focused on existing applications (the systems of record) or application development teams focused on systems of innovation. Most CIOs are looking to empower both parts of their organizations.
Containers facilitate DevOps
Organizations faced with a healthy amount of new application development gravitate toward a container environment that uses the DevOps model and microservices architecture. In fact, more than half of the IT leaders in IDG’s 2017 CIO Tech Poll say their organizations have already adopted DevOps practices or plan to do so.
The underlying concepts of this emerging DevOps model are speed, agility, scale, and cost savings:
- Developers want a solution that makes it easier to deploy their software into production with fast development cycles
- The operations team wants a secure and scalable solution that can help them better use resources, is easy to set up and maintain, and improves the customer experience
- Both sides want to spend more time using the cloud and less time building it.
- Management wants to drive the business while reducing costs.
DevOps and containers are critical to a cloud-native model for rapidly developing, deploying, and improving apps to address a variety of business requirements. Increasingly, software is a competitive differentiator. A 2015 McKinsey study found that companies that excel at software development had 80% fewer residual design defects in their software output and a 70% shorter time to market for new apps and features than other companies.
“This performance gap means that top companies can speed up the flow of new products and applications at much lower cost and with markedly fewer glitches than other companies can,” the report’s authors noted.
Containers for legacy applications
Organizations with mostly legacy applications are generally looking at containers to add flexibility and improve cost optimization. Containers make it easier to move applications or to change the characteristics of a Software Defined Data Center (SDDC) without worrying that the change will disrupt the app. This removes major barriers to responding to peak demand, low or idle activity, and changes in service-level requirements.
Is there always a payoff for using containers with legacy applications? The biggest gains occur when there is fluctuation and variability in workload or service level requirements. As with virtualization, there is little to gain if the application is highly predictable and already running at peak utilization.
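That payoff for variable workloads typically comes from orchestration-level autoscaling. As one hedged example, a Kubernetes HorizontalPodAutoscaler (names and thresholds below are purely illustrative) grows and shrinks a containerized application with demand:

```yaml
# Hypothetical autoscaling policy: replicas of a containerized app
# track CPU load, scaling out under peak demand and back in when idle.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: legacy-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: legacy-app        # the containerized workload being scaled
  minReplicas: 2            # floor during idle periods
  maxReplicas: 10           # ceiling during peak demand
  targetCPUUtilizationPercentage: 70
```

A steady, already-saturated workload gains nothing from a policy like this, which is exactly why the biggest wins come from fluctuating demand.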
Bumps in the road
With such a clear trend and compelling return, it should be easy to forge ahead by adopting what by now should be well-validated best practices. Unfortunately, most container environments were built through trial and error, shaped by the individual interests of each developer. That’s not necessarily the best path for the enterprise. In fact, many of these highly successful early adopters have been redesigning and re-architecting their DevOps environments as their priorities shift from “develop faster, easier, better” to a balance of development priorities AND the ongoing operational priorities of reliability, availability, performance, and cost.
Early adopters also built up their skills and experience through their “science experiment” method of creating these environments. The use of containers in DevOps using microservices architecture is a complex set of choices for container workflow, container orchestration, scheduling, container engine, OS, virtual infrastructure and even physical infrastructure. All of these tools, products, and applications – not to mention the vendors that supply them – would be completely foreign to the highly experienced IT staff running the infrastructure for an SAP implementation.
Even though there are more and more developers building apps using containers, standardization and best practices still seem a long way off. Some developers assemble different microservices for specific requirements, others re-architect their container environment to try something new, while still others stick with a combination of container services that they are familiar with regardless of how well suited they are for the job at hand. Chaos often follows.
This lack of standardization and automation across microservices has in some cases increased complexity instead of reducing it. Container management is the No. 1 challenge of container usage among respondents to the Cloud Foundry survey. Despite the promise and potential of containers, it’s fair to say that there are a lot more organizations that would like to be using them than are currently doing so. Too many organizations just haven’t been able to overcome these obstacles to reach this advanced state of development expertise.
And finally there is the tragic turn when easy, cheap, and convenient becomes costly lock-in. Public cloud is a fantastic resource in the container space – it’s hard to imagine that containers, microservices architecture, and the CI/CD advantages of DevOps could have been achieved without it. But that doesn’t mean public cloud is always the best choice. As application development matures toward production, the set of interdependencies grows more complex. This is hard to see initially: at first, all of the advantages point to public cloud. But costs multiply as service levels become more aggressive and the amount of data movement and data access increases. In these more mature production settings, the initial cost advantages can quickly fade, and private cloud, with its in-house physical infrastructure, can often lower costs and give you more direct control over your environment.
Clearing the path for container adoption
Creating a platform for your developers and operations staff to achieve the most from containers does not have to be a science project. Nor do the advantages need to be isolated to one group or another. Done right, your container environment should make everyone in IT better at what they do, making it easier to achieve your business goals. Here are four core principles to keep in mind:
Agility
You want developers focusing on building great software, not getting bogged down creating a new microservices architecture for every project. To accelerate and streamline your DevOps activity, operational tasks, and the management of legacy applications, consider pre-designing and pre-engineering a set of standardized container environments.
Standardization streamlines process and removes the guesswork for what kind of container environment and microservices architecture should be in place for each project. And, if it’s pre-designed you already know it works. Make sure your set of standardized choices provides enough options for everything you’re likely to need. This way you won’t be caught with an architecture stack perfect for speeding development, but insufficient to deliver the service levels required later in production.
The complete stack
When establishing your set of standardized environments, make sure to put all of the pieces together. There are many choices at each layer of the stack – each with different strengths and weaknesses. Choose each layer so that it best matches your requirements and situation – while ensuring, of course, that the layers work well together. And don’t just string together container workflow, orchestration, scheduling, and a container engine and call it a day. Extend your stack all the way down to the OS, virtualization, and physical infrastructure.
Many of these choices will be different for DevOps than what you use for legacy applications. In DevOps you’ll have OS choices like Ubuntu, Rancher, and Photon. The virtual infrastructure might be cloud-based, like Google, Azure, or AWS when your physical infrastructure is public cloud; or more traditional virtualization like VMware, Hyper-V, or KVM for private cloud.
Ease of use
Having everything set up in advance with all of the potential choices is a big head start to making your container environment easy to use. Take the extra step to provide self-service access to those pre-designed choices. Investing the energy up front to integrate and automate throughout the stack will exponentially increase your payoff and help to accelerate adoption. For example, converged infrastructure and orchestration of enterprise cloud containers makes it easier for developers to do their work by reducing the number of deployment steps, simplifying scheduling logic, and decoupling application dependencies. That goes even further when those capabilities are combined with container management and orchestration.
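The self-service idea can be sketched in a few lines of Python (every name and stack choice here is hypothetical): developers request one of a small set of pre-designed, pre-validated stacks rather than assembling their own.

```python
# Minimal sketch of self-service access to standardized container
# environments. All stack names and components are hypothetical.
STACK_CATALOG = {
    "web-frontend": {"os": "ubuntu", "engine": "docker",
                     "orchestrator": "kubernetes", "infra": "public-cloud"},
    "batch-legacy": {"os": "rhel", "engine": "docker",
                     "orchestrator": "kubernetes", "infra": "private-cloud"},
}

def provision(stack_name: str, app: str) -> dict:
    """Return a deployable environment spec from the standardized catalog.

    Raises KeyError for stacks outside the catalog -- which is the point:
    only pre-engineered, pre-validated combinations can be requested.
    """
    template = STACK_CATALOG[stack_name]
    return {"app": app, **template}

spec = provision("web-frontend", "orders-service")
print(spec["orchestrator"])  # kubernetes
```

Behind such an interface, the provisioning itself can be automated end to end; the key design choice is that developers pick from a curated menu instead of hand-building a unique stack per project.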
Flexibility and efficiency
Start your container initiative with a plan focused on the business outcomes you are trying to achieve. There is a lot of cool new technology here, and the temptation to chase it can make it harder to stay focused on business results. Build in options for both public and private cloud so you don’t get trapped into higher costs than planned. Pre-design what you need, not what you want. Explore ways to integrate computing, storage, and network resources. Automate provisioning tasks to improve your ability to manage container-based applications. Create guidelines for choices and automate any of the steps that don’t involve decision points. And finally, honor the different goals, activities, and processes of DevOps and legacy applications.
An easy-to-use and flexible container platform that is standardized and automated will help you accelerate the investment payoff in your DevOps model and help reduce costs for your legacy applications. Done right, containers have the potential to provide the greatest increase in agility and flexibility you need to successfully drive your digital transformation.
What to do right now
A measured approach to any technology deployment is wise, and containers are no exception. Do some initial experimenting with containers outside of a production environment. This will give developers a safe environment to replicate real-world use and identify the problems that will inevitably crop up. Gain experience both in using containers to develop and deploy cloud-native apps as well as using them to make your legacy applications more portable.
Don’t make it a science experiment. Your goal is to gain the knowledge you need to get the most from the experts you’ll want to engage to design your container platform.
Date: June 26, 2017