Deep Dive: What to Know About Containers

The California Department of Consumer Affairs is part of the vanguard of state government departments using containers. At Techwire's request, DCA wrote this "deep dive" into container technology: what it is, how it can boost efficiency in government as well as the private sector, and how it might even help bridge the gap between Dev and Ops.

This explainer was written by Val Marth, section chief for infrastructure, engineering and administration of DCA's information services department, with a contribution from Jason Piccione, the department's deputy director for information services and CIO.

Containers are the next step in the evolution of system efficiency, promising to squeeze almost every drop of computing power from servers into applications while maintaining security and performance.

Containers simplify configuration by consolidating services onto common hosts, where each container gets its own isolated share of server resources and is packaged and run according to Open Container Initiative (OCI) standards.
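
As a rough illustration of that isolation, a single Docker command can cap a container's share of host memory and CPU; the image and limits below are examples, not DCA's actual configuration:

    # Start an illustrative web container with a capped share of host resources
    docker run -d --name web-example \
      --memory=512m --cpus=1.5 \
      nginx:latest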

Code pipeline management, application debugging and dynamic scaling are easier with containers because of their immutable nature: They are built from self-contained images that can be pulled from a registry and started with environment-specific configuration passed as arguments at runtime. In fact, the most difficult part of switching to containers is adopting the DevOps culture that puts developers and operations staff on the same playground, removing the traditional silos between them.
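
To make that concrete, the same immutable image might be pulled from a registry and started in a test environment with its settings supplied at launch; the registry, image and variable names here are hypothetical:

    # Pull one immutable image, then start it with environment-specific settings
    docker pull registry.example.com/myapp:1.4.2
    docker run -d --name myapp-test \
      -e APP_ENV=test \
      -e DB_HOST=db.test.example.com \
      registry.example.com/myapp:1.4.2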

Rather than configuring several individual servers and networks to create a new system, containers allow the system to be laid out on a single host. Each container acts as a traditional application, web or database server with a stripped-down OS, using far fewer resources and allowing many containers to run on a single large server. Private networks between containers improve system security and performance by reducing the attack surface and substituting the host backplane's speed for network interconnect bandwidth.
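
A minimal sketch of that single-host layout, with illustrative names and images, might look like this: a private network, a database container and an application container, where only the application publishes a port to the outside world:

    # Create a private network and place the database and app on it
    docker network create app-net
    docker run -d --name db --network app-net \
      -e POSTGRES_PASSWORD=example postgres:15
    docker run -d --name app --network app-net -p 8080:8080 \
      registry.example.com/myapp:1.4.2
    # The database is reachable only from containers on app-net (as "db:5432");
    # nothing is exposed on the host except the app's port 8080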

The most important thing to remember here is that containers are disposable, so the data they create must be saved outside the container using remote filesystem strategies (where “remote” means “outside the container”). When the container is stopped and removed, its replacement can start and connect to the remote filesystems without affecting data integrity. This approach allows for patching and upgrades in the few seconds it takes to stop and delete the old container and start the new one.
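
A simple sketch of that pattern, using a Docker named volume and hypothetical image tags, shows how an upgrade amounts to swapping containers around the same data:

    # Keep application data outside the disposable container
    docker volume create app-data
    docker run -d --name myapp -v app-data:/var/lib/myapp \
      registry.example.com/myapp:1.4.2
    # Patch or upgrade: remove the old container, start the new image
    # against the same volume
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -v app-data:/var/lib/myapp \
      registry.example.com/myapp:1.4.3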

Container development fits perfectly into Continuous Integration/Continuous Delivery (CI/CD) pipelines, where popular tools like Git, Bugzilla, Artifactory, Selenium, SonarQube and Jenkins are used by developers and operations staff to make tangible improvements in the customer experience every day. These improvements are tested and deployed as soon as the changes are checked in, dynamically moving through development and system test environments as successive batteries of automated tests are passed and eventually landing in User Acceptance Test (UAT), where an email beckons for eyes-on-glass review before the deployment can be queued for production.
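
The container-facing steps of such a pipeline boil down to a few repeatable commands; the sketch below shows the kind of build-test-push sequence a Jenkins job might run when a change is checked in (paths, tags and the test entry point are hypothetical):

    # Build the candidate image, run the automated tests inside it, then push
    # it to the registry so it can be promoted through the environments
    # (BUILD_NUMBER is supplied by the CI tool)
    docker build -t registry.example.com/myapp:${BUILD_NUMBER} .
    docker run --rm registry.example.com/myapp:${BUILD_NUMBER} ./run-tests.sh
    docker push registry.example.com/myapp:${BUILD_NUMBER}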

This rapid movement through the environments is simplified by the ability to pass environment variables to the container as it is started. Improved operational efficiencies and environment-agnostic images allow for blue-green deployments, because the same hardware can run more instances of an application or service as containers than it could as virtual machines. All of this leads to shorter development cycles and faster time to market for those urgent fixes or popular features.
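
A blue-green swap on a single host, sketched here with hypothetical names and a hypothetical health endpoint, is little more than starting the new build beside the old one and repointing traffic:

    # Start the "green" build alongside the live "blue" container
    docker run -d --name myapp-green -e APP_ENV=prod -p 8082:8080 \
      registry.example.com/myapp:1.4.3
    # Verify it, repoint the load balancer from :8081 (blue) to :8082 (green),
    # then retire the blue container
    curl -fsS http://localhost:8082/healthz
    docker stop myapp-blue && docker rm myapp-blue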

Docker and rkt (Rocket) are the two most widely adopted container platforms, and most of the public cloud services offer tools to work with their container images. Likewise, several other technologies are vital to managing pools of containers and orchestrating deployments, including F5 load balancing and OpenShift (Red Hat's enterprise Kubernetes distribution). Using F5 load balancing in front of OpenShift provides container elasticity: more pods spin up on demand and spin down when load slackens. It also allows production database maintenance while the application is live, by switching the public-facing containers' database connection to the disaster recovery (DR) copy for the duration of the maintenance.
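
The elasticity piece can be as simple as a couple of OpenShift commands; the resource name below is illustrative, and newer clusters may use Deployments rather than DeploymentConfigs:

    # Scale the application out for peak load, or let the platform decide
    oc scale dc/myapp --replicas=6
    oc autoscale dc/myapp --min=2 --max=10 --cpu-percent=75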

Furthermore, we are no longer bound by the literal and figurative walls of the data center: Containers extend the boundaries of the data center into the cloud. Most of us have chosen to put some of our eggs in our local data center basket, with others in private or public cloud baskets. Containers allow us to move our running applications and services from basket to basket without the customer noticing. In fact, moving between data centers and clouds can be automated for a variety of reasons, including disaster recovery, increased or decreased resource demands and per-cycle compute costs.
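
The mechanics of moving a workload between baskets are deliberately mundane: the same image is retagged for the destination registry and run there unchanged (the registry names below are examples):

    # Copy the image from the data-center registry to a cloud registry
    docker pull registry.dc.example.com/myapp:1.4.3
    docker tag registry.dc.example.com/myapp:1.4.3 \
      registry.cloud.example.com/myapp:1.4.3
    docker push registry.cloud.example.com/myapp:1.4.3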

That new paradigm sets softer boundaries between staff roles and encourages a level of cooperation and collaboration between software and hardware engineers that was unimaginable just a decade ago.  

Perhaps the time to break down these silos has come: as public-sector resources stay static or shrink, organizations are asked to produce exponentially more. We need to adopt an enterprise perspective to deliver enterprise solutions, taking a holistic view rather than the myopic view of a single vertical.

Developers will get elevated privileges on the infrastructure in some cases, and operations staff will need to develop and check in some code, such as base OS container images for Red Hat. There will be some awkward conversations in both shops as managers are asked to approve root access for developers or to allow infrastructure engineers to check in code to the project repository.
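
The kind of artifact operations staff might check in is often tiny; a shared base OS image, for example, can be a two-line Dockerfile built and published for developers to build on (the Red Hat UBI base shown is one option; registry names and tags are illustrative):

    # Dockerfile checked into the project repository:
    #     FROM registry.access.redhat.com/ubi8/ubi-minimal
    #     RUN microdnf update -y && microdnf clean all
    # Build and publish the shared base image
    docker build -t registry.example.com/base/rhel-minimal:1.0 .
    docker push registry.example.com/base/rhel-minimal:1.0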

Of course, security remains paramount to staying in business, and there are many tools that progressively tighten the security screws, to the point that no developers have elevated privileges for anything in production and all attack surfaces have been analyzed and threats mitigated using tools like Clair and Anchore. Good security practices by the operations staff are timeless and should remain in their toolbelts.
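
As a hedged sketch of where such a scanner fits, an Anchore check can gate an image before it is promoted to production; this assumes an Anchore Engine service is already running, and the image name is illustrative:

    # Submit the image for analysis, wait for it to finish, then list vulnerabilities
    anchore-cli image add registry.example.com/myapp:1.4.3
    anchore-cli image wait registry.example.com/myapp:1.4.3
    anchore-cli image vuln registry.example.com/myapp:1.4.3 all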

Secure, high-performance containers have taken system efficiency to the next level. Where we used to struggle to provide sufficient hardware to run our resource-hungry applications in multiple environments while maintaining a viable disaster recovery contingency, the minuscule resource requirements of containers leave us with enough capacity for complete duplication in production. That means we can do blue-green deployments without an outage or a reserved maintenance window. Systems don't sprawl across many servers crossing many firewalls, and developers get their fixes and new features into production faster. We send our traffic floods to the cloud now, and our websites beam secure pages to our customers regardless of what might be happening at any given data center.

Best of all, our teams are closer and more productive than ever.