This is the beginning of a short series of articles on various technologies and design/architecture patterns we’re using to rebuild the administrative infrastructure at Pagely.
We’ll get to the meat (or soy-based substitute for the vegans out there) of the article momentarily. First, I want to introduce myself since this is my first “official” article.
Hi, I’m Gordon Forsythe and I’m a software developer. I’ve been a professional programmer for somewhere around 20 years. In the various jobs I’ve had, I’ve worn many hats and been involved in many industries, including healthcare practice management & insurance, real estate, education, and some things I’d prefer my family never hear about. I also run (with a little help from my friends) the azPHP developer group. Like any good developer, I feel like I never know enough and continue to learn new things. With those years of experience comes wisdom. One of the most important bits I can pass on is summarized by a quote passed on by a friend:
A good rephrasing of this (and possibly the inspiration for the above one) is:
“Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it.” — Alan Perlis
It was this mindset, along with my “get shit done” work ethic, that probably got my long-ago former co-worker and current CTO of Pagely, Joshua Eichorn, to ask me (multiple times) to come work with them… so I did.
What Am I Doing At Pagely?
I started at Pagely with the primary goal of rebuilding the administrative infrastructure (internal and client-facing) from scratch. By administrative infrastructure, I don’t mean the WordPress site deployment architecture; I mean the user interfaces and the data required to render them. These tools let us maintain and administer our own infrastructure, and let our customers manage their hosted sites and features with us. Most of our customers know this simply as “Atomic”.
This brings us to today and the subject of this article. We are currently working on the underlying architecture of the new system, and we want to share what we’re working on and the troubles we run into along the way. The sections below cover the high-level pieces we’re going to use for the new system; each will probably become the topic of an article to come.
Microservices

In order to have a maintainable infrastructure and allow for more rapid release cycles, we are going to implement a small service for each underlying “feature” of the platform. For example, there will be an API for authentication, one for billing, one for site deployment, one for account configuration, one for log analytics… you get the picture. Previously, most, if not all, of these would be housed in a single application. Each microservice contains its own set of tests, so running them takes only a fraction of the time it would in a single full-featured API.
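To make the shape of these services concrete, here’s a minimal sketch of what one such feature service might look like, using only Python’s standard library. The route, header name, and token store are hypothetical illustrations, not Pagely’s actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token store; a real service would query a database or cache.
VALID_TOKENS = {"demo-token"}

class AuthHandler(BaseHTTPRequestHandler):
    """A single-purpose "authentication" endpoint: one feature, one service."""

    def do_GET(self):
        token = self.headers.get("X-Auth-Token", "")
        body = json.dumps({"authenticated": token in VALID_TOKENS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

def run(port=8080):
    """Serve the auth feature on its own port, independent of other services."""
    HTTPServer(("127.0.0.1", port), AuthHandler).serve_forever()
```

Because a service like this owns exactly one concern, its test suite stays small and fast, which is what makes per-service test runs so much quicker than testing everything in a monolith.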
Client-Side Rendering

Previously, all of the administration tools used the standard old-school approach of rendering everything on the server and spitting out an entire HTML page every time you click on something. This is incredibly inefficient: every click triggers database or cache queries to re-render the entire page, everything from login info and status notifications to all of the navigation links, which are rebuilt based on the user’s role or security level.
For a standard webpage, this is the norm, since the content needs to be crawlable by search engines so others can find it. Since we’re talking about an application here (a set of applications, really), that requirement does not apply: you have to be logged in to even reach the interface.
Containers

The actual deployment of the individual APIs and front ends will be handled using containers. Containers have recently been popularized by the Docker platform, but there are many methods for creating and handling them.
Containers, to summarize, are a way to consistently deploy a process without requiring an entire virtualized OS. For example, our previous development process for the new architecture required me to spin up an entire virtual machine for every single microservice, database, and front end, each with its own CPU and memory requirements. Each VM also had its own dedicated set of filesystems, duplicating quite a bit of data and slowing my development box to a crawl. With containers, each service still runs in its own separate environment, but much of the underlying system is shared as long as the containers run on the same base “image.”
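As a sketch of how that base-image sharing works, here is a minimal, hypothetical Dockerfile (the image name, paths, and service name are illustrative, not our actual setup). Every service built FROM the same base image reuses those layers on disk rather than duplicating a full OS per service.

```dockerfile
# Base image shared by all services built from it; its layers exist
# once on disk no matter how many services use them.
FROM php:7-fpm-alpine

# Only this service's code is unique to this container.
COPY ./src /var/www/auth-api
WORKDIR /var/www/auth-api

# Each container runs exactly one process.
CMD ["php-fpm"]
```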
Once a container is built and running properly with the right code, its image can be “published” to a container registry, where it is stored as a simple set of files. We can then deploy that image multiple times, and each resulting container will be exactly the same. The previous method of building up a new virtual machine or a new VPS each time we needed to scale or update meant each one was not necessarily identical, creating inconsistency and, possibly, bugs.
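The publish step itself is just a couple of standard Docker commands against a registry; the image name, tag, and registry hostname below are placeholders, not ours:

```shell
# Build the image from the Dockerfile in the current directory,
# tag it for a (placeholder) private registry, and push it.
docker build -t auth-api:1.0.0 .
docker tag auth-api:1.0.0 registry.example.com/auth-api:1.0.0
docker push registry.example.com/auth-api:1.0.0

# Any host can now pull and run an identical copy.
docker run -d registry.example.com/auth-api:1.0.0
```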
Centralized Logging

Oh, the logs! Where to find logs for the API? No, what about the web server logs? Perhaps a database query failed? Where do I find that? Was that request even logged? How the heck do I run stats on all these files?
If that sounds familiar, you may want to look into a way to centralize your logging. Centralized logging is exactly what it sounds like–putting all the logs in one place. There are quite a few services (both commercial and self-hosted) out there which allow you to do this. Most use a collector to stream your logs to a central database. This allows you to easily filter logs or simply watch them stream by. Doing this, we can easily find and correlate issues between different microservices and/or features. We can also just ensure things are running as they should, without having to have ten “tail” terminals running. Many systems also allow you to split and send specific data to metrics databases so you can easily see application throughput, error rates, query times, etc.
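One practical prerequisite for centralizing logs is emitting them in a structured format the collector can parse. Here’s a minimal Python sketch (the service name and fields are made up for illustration): each record becomes one JSON object per line, which virtually any log shipper can ingest and a metrics pipeline can split apart.

```python
import json
import logging

class JsonLineFormatter(logging.Formatter):
    """Format each log record as a single JSON object per line."""

    def __init__(self, service):
        super().__init__()
        self.service = service

    def format(self, record):
        return json.dumps({
            "service": self.service,       # lets the collector tag the source
            "level": record.levelname,
            "message": record.getMessage(),
        })

def make_logger(service, stream):
    """Build a logger whose output a centralized collector can ingest."""
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonLineFormatter(service))
    logger = logging.getLogger(service)
    logger.setLevel(logging.INFO)
    logger.handlers = [handler]  # replace, don't stack, handlers on reuse
    logger.propagate = False
    return logger
```

In production the stream would typically be stdout (picked up by the container’s log driver) or a socket to the collector, rather than a local file.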
Continuous Integration & Deployment

Our new code is hooked into a continuous integration (CI) system, making it easier to test. Each time we push or merge to our code repo, our CI system does a fresh checkout of the code. It then builds a pre-configured container and runs the unit and integration tests on all the things. If all tests pass and the code was on one of the main branches designated for certain environments, that code is packaged into a new container and deployed automatically.
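That flow maps onto a fairly generic pipeline definition. The YAML below is an illustrative sketch in a GitLab-CI-like syntax, not our actual configuration; the branch name, image tags, registry, and test script are all placeholders:

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - docker build -t auth-api:ci .        # fresh checkout, pre-configured container
    - docker run --rm auth-api:ci ./run-tests.sh

deploy:
  stage: deploy
  only:
    - master                               # only designated branches auto-deploy
  script:
    - docker tag auth-api:ci registry.example.com/auth-api:latest
    - docker push registry.example.com/auth-api:latest
```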
There are still many smaller pieces of the puzzle up in the air besides the ones mentioned here. We’ll collapse the waveform on those as we get to them. To test out this new architecture, we are adding a few new features to the existing stack; those features will be the first microservices and interfaces we build. That interface will grow into the full-featured internal admin UI once everything is built out. Eventually, we will run the new and old systems in parallel until we deprecate the old one.
Notice that I haven’t given any specifics on which products we are using. Yes, I could give some of them now, but you’ll have to wait for the detailed articles to come.
See you next time!