Pulling Docker images from Docker Hub doesn't require any special handling or credentials, making it quite simple to consume them in a Jenkins Pipeline. In this blog post, however, I'll describe a simple pattern I have been using to programmatically publish Docker images to Docker Hub from a Jenkins Pipeline.
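As a sketch of the pattern, assuming a Docker Hub credential stored in Jenkins under the hypothetical ID `dockerhub-credentials`, a Scripted Pipeline using the Docker Pipeline plugin might build and push an image like so:

```groovy
node {
    checkout scm

    // Build the image from the Dockerfile in the workspace;
    // the repository name below is hypothetical.
    def image = docker.build('example/my-app')

    // withRegistry() wraps the enclosed pushes in a docker login
    // using the named Jenkins credential.
    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-credentials') {
        image.push('latest')
        image.push("build-${env.BUILD_NUMBER}")
    }
}
```

The nice property of this approach is that the registry credentials never appear in the Jenkinsfile itself; they live in the Jenkins credentials store and are only referenced by ID.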
To ignore Node.js as a possibility in certain problem domains, for which it is the best tool for the job, is a tremendously silly and at times unprofessional position. A particularly helpful addition, for me at least, are the async/await keywords, which make asynchronous code read much more like its synchronous counterpart.
I have been thinking a lot about customer support over the past two years. My role as "Director of Evangelism" has placed me at the leading edge of what could be referred to as "customer success" or "user education." What I have come to appreciate, especially in Enterprise-focused startup companies, is the connected and complementary roles between Product, Engineering, Quality, Evangelism, Customer Support, and Sales. In an Enterprise-focused organization, what defines success for each of these groups is fundamentally the same, but they are not all equally "connected" to the customer's feedback and concerns.
At my previous company, one frequent request made by developers was along the lines of "I want to be able to run a development stack on my machine." Frankly, I never understood this desire, and still don't. While I would agree that my laptop is underpowered, running a stack of JVMs and other applications, in addition to a web browser, would bring most machines to a crawl. An ideal alternative is to simply operate a personal Kubernetes environment in a public cloud. Fortunately, that is now a genuinely simple task.
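For example, a minimal personal cluster on Azure can be stood up with a few commands; the resource group and cluster names below are hypothetical, and this assumes the Azure CLI (`az`) is installed and you have already run `az login`:

```shell
# Create a resource group to hold the cluster (hypothetical names).
az group create --name my-k8s-rg --location westus2

# Create a small, single-node AKS cluster -- plenty for personal development.
az aks create \
  --resource-group my-k8s-rg \
  --name my-dev-cluster \
  --node-count 1 \
  --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster.
az aks get-credentials --resource-group my-k8s-rg --name my-dev-cluster
kubectl get nodes
```

Other clouds offer equivalent one-liners; the point is that a disposable, personal cluster no longer requires running anything heavy locally.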
One foggy morning a few weeks ago, I received a disk usage alert courtesy of
the Jenkins project's infrastructure on-call rotation. In every infrastructure
ever, disk usage alerts seem to be the most common to crop up: something,
somewhere, is not properly cleaning up after itself. This time, the alert was
from our own Jenkins environment. The logging filesystem wasn't the problem;
the filesystem hosting our Jenkins data was perilously close to running out of
space. The local time was about 6:20 in the morning, and yours truly was
quietly furious at the back of a bus headed into San Francisco for the day.
One of the first pain points many organizations endure when scaling Jenkins is
the rapid accumulation of artifacts on their master's filesystem. Artifacts
are typically built packages, such as archive files, which are useful to
persist after a Pipeline Run has completed for later review as necessary. The
problem that manifests over time is quite predictable: archived artifacts incur
significant disk usage on the master's filesystem, and the network traffic
necessary to store and serve the artifacts becomes a non-trivial problem for
the availability of the Jenkins master.
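For context, the archiving in question usually comes from the `archiveArtifacts` step. A minimal Declarative Pipeline sketch (build command and paths below are hypothetical) shows where the master's disk pressure originates:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Hypothetical build command producing packaged artifacts.
                sh 'make package'
            }
        }
    }
    post {
        success {
            // Every matched file is copied to, and thereafter served from,
            // the master's filesystem -- the source of the disk and network
            // pressure described above.
            archiveArtifacts artifacts: 'build/distributions/*.tar.gz',
                             fingerprint: true
        }
    }
}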
In this post I would like to share a handy little workaround for returning to Google Hangouts, despite Google Meet. Having narrowly escaped working at Google via acquisitions twice, I have stood by and watched as the AdWords money-pipe funded rewrite after boondoggle after rewrite. When Google announced "Google Meet" earlier this year as an "enterprise-friendly version" of Google Hangouts, I was annoyed, but not surprised.
After learning how to build my first terrible website, in ye olden days, perhaps the second useful thing I ever really learned was to run multiple websites on a single server using Apache VirtualHosts. The novelty of being able to run more than one application on a server was among the earliest things I recall being excited about. Fast forward to the very different deployment environments we have available today, and I find myself excited about the same basic kinds of things. Today I thought I would share how one can implement a concept similar to Apache's VirtualHosts across Namespaces in Kubernetes.
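As a sketch of the idea, assuming an ingress controller such as ingress-nginx is already running in the cluster, each Namespace can declare its own Ingress keyed on a distinct hostname, much like a VirtualHost keyed on `ServerName` (namespaces, hostnames, and service names below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  namespace: team-a
spec:
  rules:
    # Requests for this hostname are routed to the "blog" Service
    # inside the team-a Namespace.
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
  namespace: team-b
spec:
  rules:
    # A different hostname, served by a Service in a different Namespace,
    # all behind the same ingress controller and IP address.
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop
                port:
                  number: 80
```

Because an Ingress can only reference Services in its own Namespace, the per-Namespace Ingress is what makes this map so cleanly onto the VirtualHost model: each team owns its own hostname and routing rules.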
This research was funded by CloudBees as part of my work in the CTO's Office with the vague guideline of "ask interesting questions and then answer them." It does not represent any specific product direction by CloudBees and was performed with Jenkins, rather than CloudBees products, and Kubernetes 1.8.1 on Azure.
Months ago Microsoft announced Azure Container Instances (ACI), which allow for rapidly provisioning containers "in the cloud." When they were first announced, I played around with them for a bit, before realizing that the pricing for running a container "full-time" was almost 3x what it would cost to deploy that container on a comparable Standard A0 virtual machine. Since then, however, Azure has added support for a "Never" restart policy, which opens the door for using Azure Container Instances for arbitrary task execution.
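As a sketch of what that looks like, a one-shot task can be launched with the Azure CLI; the resource group and container names below are hypothetical, and `--restart-policy Never` is what ensures the container runs to completion once and is then left terminated rather than restarted:

```shell
# Run a one-shot task in ACI (hypothetical names; assumes `az login` is done).
az container create \
  --resource-group my-tasks-rg \
  --name one-shot-task \
  --image alpine:3 \
  --restart-policy Never \
  --command-line "echo 'task complete'"

# Inspect the output once the task has finished.
az container logs --resource-group my-tasks-rg --name one-shot-task
```

With per-second billing and no restart, you pay only for the task's actual runtime, which sidesteps the "full-time" pricing problem entirely.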