For a variety of reasons, WordPress sites are targeted by hackers, which makes addressing security vulnerabilities a critical aspect of managing a WordPress website. WordPress attacks are not an isolated issue that only affects popular sites; you can't avoid them simply by being a small, relatively unknown website. If your site runs on WordPress, it is safe to assume it will be targeted at some point. A compromised site could be an inconvenience, or it could be business-ending if it leads to a customer data leak.

To protect your site from being compromised, you should install (and correctly configure!) WordPress security plugins to cover the vulnerabilities your host doesn't handle for you. If you're not using a good Managed WordPress Host (like us), chances are good that your site needs a couple of extra security plugins to add WordPress-specific security defenses. Thankfully, there are a wide variety of excellent WordPress plugins available to help with everything from authentication to zipping and sending automated backups into the cloud.

In this post, we help you choose the best WordPress security solutions for your website by listing some of the most popular plugins. We provide a concise summary of each and categorize them by their primary function. You should combine plugins to create the types of protection you need. To know what protection your site needs, ask your current host whether they provide:

- General defensive hardening for WordPress sites
- Daily, automated backups held for at least two weeks
- Firewall protection
- Two-factor authentication (2FA)
- Malware scanning

The plugins listed here are divided into categories so you can easily jump to the ones that interest you:

- All-in-one security plugins
- Malware scanning plugins
- Firewall plugins
- Two-factor authentication plugins
- Backup plugin solutions
- Miscellaneous security plugins

The plugins aren't listed in any particular order (they aren't ranked), but to make this list they had to:

- Have been updated recently (within the last couple of months)
- Have a good user rating (4 stars and up) in the WordPress directory
- Have a decent number of installations (100+)

All-in-one security plugins

All-in-one WordPress security solutions try to protect you from all security threats with general hardening of your site's defenses: patching common security vulnerabilities, file protection, brute force protection, a firewall, and so on. If you just want the easiest, fastest solution for protecting your site, an all-in-one solution probably makes the most sense. It is important to know that you'll have to go through the configuration process for the plugin to be effective; you can't just activate the plugin. All the plugins listed here have thorough documentation to guide you through the setup process.

iThemes Security

iThemes Security protects your site in more than 30 ways. The free version of the plugin checks many of the important security boxes like hiding your login URLs, file change detection, forced SSL, securing wp-admin, and more. The Pro version starts at $80 and adds Google reCAPTCHA integration, two-factor authentication, core file comparison, and other advanced security features for top-notch site protection.
WordPress directory rating: 4.7 out of 5
Free version available: Yes
Premium version starts at: $80
Website | WordPress directory | Documentation

WordFence

WordFence Premium offers some interesting features, including a real-time IP blacklist (when one site running WordFence is attacked, the offending IP address is blocked across all other sites running WordFence), real-time website firewall rule updates, and malware signature updates. In addition, they offer an endpoint firewall that may offer more protection than the cloud firewalls offered by many solutions because the traffic to and from your site remains encrypted throughout the process.

WordPress directory rating: 4.8 out of 5
Free version available: Yes
Premium version starts at: $99
Website | WordPress directory | Documentation

All In One WP Security & Firewall

This solution is great for someone with limited technical knowledge because it divides different security features into basic, intermediate, and advanced categories, then gives you an overall grade on how well-protected your site is. This plugin is unique because it is a completely free WordPress security plugin; it does not offer a paid version. WebNots has a great configuration and setup tutorial should you choose to go this route.

WordPress directory rating: 4.8 out of 5
Free version available: Yes
Premium version starts at: n/a
WordPress directory | Documentation

BulletProof Security

BulletProof offers a free version and a Pro version that includes additional security features for a one-time fee with lifetime updates (no recurring monthly or yearly charges). The Pro version offers important features like auto-restore, an upload anti-exploit guard, php.ini security protection, and more.

WordPress directory rating: 4.8 out of 5
Free version available: Yes
Premium version starts at: $69.95
Website | WordPress directory | Documentation

Astra Web Security

Astra's core features are a Web Application Firewall (WAF), malware removal, file upload scanning, and the general security hardening features you'd expect an all-in-one solution to have. If you're an agency, Astra offers tools that make it easy to monitor and manage the security of the sites you're responsible for. A free version of the plugin is not available to install and try on your site, but monthly plans start at $9 a month.

WordPress directory rating: n/a
Free version available: No
Premium version starts at: $9/month
Website | Documentation

Security Ninja

Security Ninja offers a one-click, 50-point scan of your website. You can test the plugin for free, but to unlock all its features you'll need the paid version, which starts at $29 for a year of updates and support. The Pro version includes a firewall, malware scanner, auto fixer, core scanner, and other tools you'd expect a comprehensive WordPress security solution to include.

WordPress directory rating: 4.3 out of 5
Free version available: Yes
Premium version starts at: $29/year
Website | WordPress directory | Docs

Jetpack

Jetpack is the Swiss Army knife of the WordPress world, and one of the tools on that multifunction knife is security. If you're already a Jetpack user, it makes sense to see if it offers the level of protection you need before adding yet another plugin to your WordPress app. Jetpack being built and maintained by Automattic lends it authority, but it is less robust than some of the other all-in-ones listed here in terms of the number of security enhancements it provides.
Jetpack includes Downtime Monitoring, Plugin Updates, and Secure Sign-On on the free version, plus Security Scanning, Backups, and Spam Protection on the paid version. The $9-a-month paid version includes all security features as well as the many other WordPress enhancements Jetpack offers.

WordPress directory rating: 3.9 out of 5
Free version available: Yes
Premium version starts at: $9/month
Website | WordPress directory | Docs

Malware scanning plugins

A malware scanner protects your site from malicious code by checking your files for known malware and suspicious code. While many all-in-one plugins include malware scanners, you may want a standalone malware scanner if, for instance, you've already taken care of general WordPress security and hardening of your defenses.

MalCare

MalCare is a malicious code scanner that also helps you clean up any infected files. MalCare states that their scanner will not slow your website down because no load is placed on your server's resources. Similar to WordFence, MalCare leverages its network of websites to create a smart firewall that is updated as new threats are identified. One year of MalCare protection starts at $99 for one site.

WordPress directory rating: 4.5 out of 5
Free version available: Yes
Premium version starts at: $99
Website | WordPress directory | Docs

Cerber Security, Antispam & Malware Scan

Cerber protects your site with a malware scanner, integrity checker, and file monitor to continuously check your site's files for signs of a malicious code infection. This plugin also includes brute force protection and various anti-spam protections, and it logs suspicious activity, making it close to an all-in-one security solution.

WordPress directory rating: 4.9 out of 5
Free version available: Yes
Premium version starts at: $29/quarter
Website | WordPress directory | Docs

WordPress firewall plugins

WordPress firewalls are web application firewalls (WAFs) designed specifically for protecting WordPress by monitoring and controlling incoming and outgoing traffic. Basically, a firewall is a barrier that protects your WordPress website from potentially malicious traffic. When your firewall detects malicious traffic, it drops the connection.

BBQ: Block Bad Queries

BBQ claims to be the fastest WordPress firewall plugin available. It is fully customizable but needs zero configuration to launch. The paid version is powered by the 5G/6G blacklist and offers IP address whitelisting along with advanced configuration and customization options.

WordPress directory rating: 5 out of 5
Free version available: Yes
Premium version starts at: $20
Website | WordPress directory | Docs

Sucuri

This cloud-based firewall uses application profiling, signatures and heuristics, and a correlation engine to protect your site from unwanted traffic. Sucuri's paid plans start at $199/year and offer additional protection and services, including malware and hack cleanup, blacklist monitoring, virtual patching, and a CDN for faster performance.

WordPress directory rating: 4.4 out of 5
Free version available: Yes
Premium version starts at: $199
Website | WordPress directory | Docs

Two-factor authentication (2FA) plugins

2FA plugins protect WordPress from unauthorized access by adding another layer of security to the login process. Rather than simply entering your username and password, you're asked to enter a code sent to your email or phone, or generated by an authentication app like Google Authenticator or Authy.
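As background, most of those authenticator apps implement TOTP (time-based one-time passwords, RFC 6238). The sketch below shows the math they compute, assuming a raw binary shared secret; it's a minimal illustration, not what any particular plugin ships. Real implementations also handle base32-encoded secrets, clock drift, and replay protection.

```php
<?php
// A minimal TOTP (RFC 6238) sketch: the server and the user's phone share
// $sharedSecret; both derive the same 6-digit code from the current time window.
function totp_code(string $secret, int $timestep = 30, int $digits = 6): string {
    $counter = pack('J', intdiv(time(), $timestep));       // 64-bit big-endian time counter
    $hash    = hash_hmac('sha1', $counter, $secret, true); // HMAC-SHA1 per RFC 4226
    $offset  = ord($hash[19]) & 0x0F;                      // dynamic truncation
    $value   = (ord($hash[$offset]) & 0x7F) << 24
             | ord($hash[$offset + 1]) << 16
             | ord($hash[$offset + 2]) << 8
             | ord($hash[$offset + 3]);
    return str_pad((string)($value % (10 ** $digits)), $digits, '0', STR_PAD_LEFT);
}

$sharedSecret  = random_bytes(20); // provisioned once, stored server-side and in the app
$submittedCode = '492039';         // what the user types at login (example value)

// Login succeeds only if the submitted code matches the current time window.
$ok = hash_equals(totp_code($sharedSecret), $submittedCode);
```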
Google Authenticator 2FA plugin

The free version of this plugin offers two-factor authentication for a single user. If you need protection for additional users, the cost is $5 for 2 users, $20 for up to 5 users, and $30 for up to 50 users. In spite of the name, this plugin is actually compatible with many 2FA methods, not just Google Authenticator, including Authy, LastPass Authenticator, QR codes, push notifications, soft tokens, and security questions (KBA).

WordPress directory rating: 4.5 out of 5
Free version available: Yes
Premium version starts at: $5
Website | WordPress directory | Docs

UNLOQ

UNLOQ specializes in two-factor authentication. They realized that installing, configuring, and managing a 2FA solution can be daunting and overwhelming to many users, so they intentionally created a 2FA solution that is easy to set up and manage. Authentication codes can be sent by push notification, time-based one-time password (provided by the UNLOQ mobile app), or email. UNLOQ is free for up to 100 users.

WordPress directory rating: 4.4 out of 5
Free version available: Yes
Premium version starts at: Free up to 100 users, then $19/month for 101-200 users
Website | WordPress directory | Docs

Backup plugin solutions

Having frequent, scheduled backups is an important part of risk mitigation. In the event of a security issue, you'll need an uncompromised version of your website to restore to. While these plugins make the process of creating and managing backups simple, Pagely customers do not need a backup solution because we handle that for you as part of our managed WordPress hosting solution.

UpdraftPlus

UpdraftPlus is a popular and highly regarded WordPress backup solution that can handle full and incremental backups of files and databases, automated backups prior to updates, and migrations. It is highly configurable, allowing you to send your backups to a number of remote locations including Google Drive, Dropbox, AWS, and more. The premium version gives you access to a variety of useful add-ons and is $70 for two sites ($45/year after the initial $70 fee).

WordPress directory rating: 4.8 out of 5
Free version available: Yes
Premium version starts at: $70/first year, $45/year after
Website | WordPress directory | Docs

BackupBuddy

BackupBuddy offers complete site backup, restore, and migration, and includes advanced features like database backup and rollback. The easy rollback feature is convenient if you ever make a small mistake and don't want to go through the hassle of restoring from a complete backup. The premium version of BackupBuddy is $80 for 1 site. There is no free or trial version available in the WordPress directory, but this is a highly regarded solution worth evaluating.

WordPress directory rating: n/a
Free version available: No
Premium version starts at: $80
Website | Docs

VaultPress (part of Jetpack)

Jetpack was already mentioned as an all-in-one security solution, but it also includes options for backing up your website. Like its security features, these backup features are straightforward and may be enough for many users, but they lack the advanced options and depth more specialized backup plugins offer.

WordPress directory rating: 3.9 out of 5
Free version available: Yes
Premium version starts at: $9/month
Website | WordPress directory | Docs

Miscellaneous security plugins

These are highly specialized plugins that offer unique security benefits but don't fit neatly into other security categories.
Simple History

Simple History logs events that happen in WordPress, including changes to pages and posts, uploads, plugins installed or modified, comments, logins and failed login attempts, and data exports and data erasure requests. Having a detailed log of this type of activity can help you untangle what happened in the event there's unauthorized access to one of your WordPress accounts. Once you have a rough idea of what was modified and when, you can rewind the clock to undo those changes by restoring from a backup.

WordPress directory rating: n/a
Free version available: Yes
Premium version starts at: n/a (no premium version offered)
Website | WordPress directory

Fail2Ban

Fail2Ban offers protection from a very specific type of attack: brute force. A brute force attack is when combinations of usernames and passwords are tried, one after the other, until the right combination is found. Once authenticated, the attacker has whatever privileges and access that account provides. While using a strong password and avoiding the "admin" username will help prevent a successful brute force attack, another defense is to block access for the bots and humans trying those combinations maliciously. That's exactly what the Fail2Ban plugin does, and it does it very well. With a 4.8-star rating, it's one of the highest-rated security plugins covered here.

WordPress directory rating: 4.8 out of 5
Free version available: Yes
Premium version starts at: $99
WordPress directory | Docs

Conclusion

WordPress website security is a critical responsibility for anyone managing a WordPress site. It is not something that can be ignored or put off. You have to address it, or it's only a matter of time until your site is hacked or otherwise compromised. If managing your own site's security defenses is daunting or intimidating to you, you may want to seek out a Managed WordPress Hosting provider such as Pagely to handle security for you. We've specialized in WordPress hosting for over a decade and have a security team dedicated to proactively protecting your WordPress site from malicious activity.
For quite a few years, AWS had little competition in the cloud provider space. Today, that's no longer true. Oracle, Microsoft, Alibaba, Google, and other heavy-hitters have entered the market, and each of their cloud solutions has unique strengths and weaknesses.

The competition between cloud providers is great for end users. Ultimately, we get better technology at better prices. But if you're comparing cloud providers today, you'll probably find it challenging to choose the solution that's best for your needs. As competition increases, each provider strives to match the strengths of competing solutions and overcome their weaknesses. This means there's an ever-narrowing gap between what makes each cloud solution unique, and that can make it increasingly difficult to identify the cloud that's the best match for your project.

In this post we'll do an in-depth comparison of two popular public cloud solutions: Amazon Web Services (AWS) and Google Cloud Platform (GCP). I've tried to be fair and unbiased about the strengths and weaknesses of each solution, but you should know upfront that here at Pagely we are an AWS Advanced tier partner. Both our managed WordPress hosting solution and our serverless solution are built on AWS. It's the cloud we trust, and I hope to clearly explain why in this post while still remaining fair to GCP.

Comparing AWS vs Google Cloud in 2019

Just a few years ago, you could say that AWS was bigger and far more feature-rich, while Google Cloud Platform was priced lower and the services it did offer were solid enough that, if they met your needs, it was a strong contender. It was simple and straightforward to summarize how things stood at that point in time. Today, halfway through 2019, the lines are less clear, so it's no longer easy to boil the differences down to a single sentence.

For you, the person doing research on public cloud options, this rapidly developing competitive landscape presents another challenge: a comparison article from three, two, or even one year ago no longer captures the current situation. Things are changing that fast. We'll try to keep this article up-to-date as both clouds advance.

AWS overview

Jeff Bezos said it well: "AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich." With that generous head start, AWS has built an incredibly well-rounded solution while establishing a strong reputation for performance, reliability, and security. AWS has been, and continues to be, the bar all other clouds are measured against.

The fact that AWS was first to market does not, by itself, make them a better solution. But it does mean they've had more time to develop and refine their cloud offerings, and with something as rapidly evolving as cloud technology, this is a tremendous advantage. Their platform is more mature. They offer far more services (200+ versus roughly 50 for GCP). Those services have more options and offer more functionality. Everyone else is just trying to catch up at this point.

In general, you can't go wrong with AWS. If you're a startup pitching potential investors and you tell them your solution is built on AWS, no one blinks an eye. It's a safe bet and a time-tested, proven solution. But that doesn't mean it's always the best option for every situation, and in this post I'll point out specific situations where GCP may be the better option.
Google Cloud Platform overview

If anyone can compete with an entrenched market leader like AWS, Google is in a good position. They have a strong brand, global infrastructure, and many users of their office suite of tools (Gmail, Drive, Docs, Sheets, etc.). Their emphasis on affordability (they were the first cloud solution to provide to-the-minute billing, and they offer increasing discounts as you use more resources), security, and performance has allowed them to continue growing even as AWS's market share has also increased year over year. While some companies may be hesitant to trust their sensitive data to Google given its privacy issues, Google explicitly states that your Google Cloud Platform data is not used for advertising purposes and that there are no backdoors for government agencies.

Current market share

Market share trends don't say much about which cloud solution is best, but they indicate changes in consumer choices that reflect a shifting competitive landscape caused by stronger product offerings or pricing strategies. It can be useful to see where things stand and try to understand why. In this case, we see AWS is the 800-pound gorilla every new competitor is gunning for. AWS's market share is greater than the next three competitors combined, and it's almost 7x Google's. In spite of the increasing competition, AWS's market share continues to grow as the top four cloud solutions take market share from the multiple smaller providers that make up the "Other" category. In the years ahead, as the "Other" category continues to shrink, we'll probably start to see more direct competition between the major players. It will be interesting to see how that plays out as things heat up.

Point-by-point comparison

The core components of a cloud solution are compute and storage resources, so we'll cover those options from both providers and then compare the important higher-level points of comparison like network coverage, security, and pricing.

Compute and storage options

Virtual machine instances: EC2 versus Compute Engine

What AWS calls "EC2 instances," Google calls "Compute Engine virtual machine instances." To keep things simple, I'll refer to both virtual machine offerings simply as "instances." Google Compute Engine currently offers 19 instance configurations; AWS offers 60. The wide range of instance configurations made available by AWS means companies will be more likely to find an instance configuration that matches their project's exact needs. That said, Compute Engine does offer a variety of customization options for its instances, so boiling the comparison down to the standard configurations is misleading. More than likely, on either platform, you'll be able to find an instance that matches your project's needs, so while this is a popular point of comparison, it probably isn't a notable difference for most situations.

Storage/Disk

Storage options are similarly priced and offer relatively similar features at this point. It can be helpful to divide these services into hot storage, cool storage, and cold storage.

Hot storage: durable, available, and performance-focused storage for frequently accessed data
- Amazon S3 Standard
- Google Cloud Storage Standard

Cool storage: storage for data that is infrequently used but requires fast access when needed

- Amazon S3 Standard-IA and S3 One Zone-IA
- Google Cloud Storage Nearline

Cold storage: secure, durable, and low-cost storage for long-term archival of infrequently used data

- Amazon Glacier and Amazon Glacier Deep Archive
- Google Cloud Storage Coldline

While minute differences between these storage options exist, they're unlikely to be a deciding factor in which cloud solution is best, so I won't waste time picking those minor differences apart.

Network

Regions

(Coverage maps: Google Cloud Platform's network and AWS's network.)

If those two maps look remarkably similar to you, you're not alone. AWS does maintain an edge, but that edge is becoming smaller as Google aggressively tries to match AWS's coverage.

A region is a specific geographic location where you can host resources. According to their websites, AWS has coverage in 21 regions versus Google's 20. Regions are divided into zones, and most regions have 3 or more zones. AWS has 66 zones; Google has 61 zones with 12 more coming soon. It's safe to assume that in the not-too-distant future we'll see all major providers with essentially the same global coverage, and the differences will become negligible to the point of being moot. For now, AWS wins on this point of comparison. No other public cloud provider has the global coverage they do.

Latency

Google Cloud Platform has been touted as the fastest cloud provider. While initially they did seem to have a notable performance advantage over AWS, that gap is closing quickly as AWS optimizes their network. This is a great example of how increasing competition is forcing AWS to stay on their game and address specific concerns of the market. Here are the results of recent latency tests on EC2 instances and Compute Engine instances that I conducted from a public internet provider on the southeast coast of the US using CloudHarmony:

Test 1: July 24th, 2019
- AWS lowest latency: 76.5 ms (us-east-2a)
- GCP lowest latency: 68.5 ms (us-east-4a)
- AWS highest latency: 492 ms (ap-southeast-2)
- GCP highest latency: 328.5 ms (asia-south1-a)

Test 2: July 25th, 2019
- AWS lowest latency: 75.5 ms (us-east-2b)
- GCP lowest latency: 67.5 ms (us-east-4a)
- AWS highest latency: 505 ms (ap-southeast-2)
- GCP highest latency: 338.5 ms (asia-south1-a)

These tests indicate Google Cloud Platform still has a modest latency advantage over AWS. Particularly when it comes to services located in Asia, latency is generally lower when pinging GCP. However, the real-world performance difference is negligible. It's true that every second counts, but when you're measuring the difference in milliseconds, it counts much less. The general rule of thumb is that 100 ms of network latency creates a noticeable effect, and the performance difference between most zones is much, much smaller than that. As AWS continues to tweak and refine their network, we'll see a smaller and smaller performance gap here. Until then, GCP wins in the latency category.

Uptime and downtime

(Chart: sum of downtime hours from January 2018 to June 2019, as published by AWS and Google.)

This very recent data indicates that the AWS cloud is more reliable than GCP. This is straightforward data, so there's not much commentary needed. But for most businesses, the importance of having reliable access to their data cannot be overstated.
AWS is a clear winner in this category, and this is a major reason why they're trusted by startups, enterprise businesses, and governments alike.

Security

AWS has more security features, compliance certifications, and accreditations than any other cloud provider, including Google Cloud Platform. In addition, AWS gives fine-grained security control that's not possible on any other network. An AWS customer could restrict a user to creating a database only in a specific region, only from 3-5 pm Monday through Friday, only on a virtual private cloud, and only on M4 instances with a maximum number of IOPS. GCP simply cannot offer that granular a level of security control and permissions. Both cloud solutions are building security capabilities for their most stringent and demanding customers, and all customers benefit from those advances. While AWS does have an edge here, it'd be wrong to imply that Google Cloud is not also highly secure, with similar security credentials. Unless your company has extraordinary security needs, either cloud provider should have adequate security offerings and processes.

Pricing

Many comparison articles attempt to compare pricing on similar services between AWS and GCP. I'm going to decline to offer such a comparison. Cloud pricing has become too complex, depends on too many factors that are unique to each situation, and each solution offers unique discounting schemes. Together, these factors make a direct, accurate, apples-to-apples comparison extraordinarily challenging, if not impossible. In the end, it'd probably be misleading and useless.

GCP has traditionally been considered the more affordable option. You'll see this repeated in many older comparisons, so I want to be careful to point out that it isn't as true in 2019 as it has been in years past. AWS has recently become far more aggressive with their pricing and discounting, so this advantage applies much less today. I recommend starting a conversation with both providers to see what they're willing to offer in terms of discounts. This will give you a better idea of your true costs with each provider.

AWS vs GCP: Seeing the big picture

While there are areas where GCP is doing better than AWS, you'd be hard-pressed to make the case that GCP is better than AWS overall. Where GCP is outperforming AWS, those areas are usually minor, and AWS has either made efforts to close those gaps or plans to close them in the near future.

When you zoom out and look at the big picture, AWS is clearly the better cloud solution at this point in time. They offer far more services with far more features, and their network covers more of the globe. They're more reliable, more secure, and more established. They're winning in almost every way you can measure a cloud provider, and that makes sense considering they had a multi-year head start.

With the differences between AWS and GCP becoming smaller with time, it's important that you know exactly what your company needs from its cloud provider for its project. While AWS is the best overall solution for most companies most of the time, if Google Cloud has a strength that matters for your particular project, it may make sense to go that route.
Not all hosts are created equal. When it comes to Managed WordPress Hosting, that fact is becoming ever more clear to us, with near-daily sales calls from people whose business-critical WordPress sites are at a loss because their current host is failing them. This is a conversation that happens all too often, so we're addressing it here in hopes that we can reach a few more of you experiencing this and offer an action plan (or, at least, a glimmer of hope). All is not lost.

There are lots of reasons why your current host might stop working for you. For example, the exciting time comes when your site grows to the point that shared web hosting no longer works. Or maybe your host lacks WordPress fine-tuning in their infrastructure or hosting stack, so your site isn't operating at its full potential given the CMS you've chosen. It could also be that your host lacks the support knowledge to identify an issue and help you optimize; they just try to shamelessly upsell. More basic? Maybe the infrastructure of the hosting company itself is changing, or you simply don't share the same morals and ethics.

Whatever the reason, they're all frustrating. The good news? They're all avoidable. Migrating hosts can seem like a big task to take on, maybe even something you avoid, but the benefits outweigh the extra work. Here's your survival guide to getting it done with the least amount of friction to your day-to-day.

Why your current web host might be failing right now

In order to put together an action plan, you first need to understand what about your current situation isn't working. Here are a few scenarios to get your brain thinking. Think granularly here, because the more specifics you have, the better you'll be able to choose another provider for your migration.

- Your host doesn't provide true managed WordPress services. Patches and updates are applied without proper testing and QA.
- Your host doesn't provide a deep level of DevOps knowledge. They cannot help you troubleshoot the bottleneck in performance.
- Your host has sold you on a physical server that runs outdated hardware, and you can no longer scale or keep up with demand.
- Your host sold you on shared containerized hosting, and the shared VPS cannot keep up with the demand of your site and the other customers on the server.

Issues like these can't be ignored. The security, performance, and scalability of your site are fundamental benefits of using a top-notch managed WordPress host. If the host themselves is becoming a blocker in any of those areas, it's undoubtedly time to migrate on.

What to prepare before moving to a new host

This checklist will help you get the basics together before you decide on a new host. Use these findings as conversation points in any meeting you take; the faster you understand a new host's ability to meet your needs in these areas, the better you'll be able to identify the best fit for you.

- Understand how many WordPress apps you have and whether they are single WordPress installs or multisite installs.
- How much disk space do all of your WordPress websites consume?
- How much disk space does your database consume?
- Do you have access to migrate the database via phpMyAdmin or from the command line via SSH?
- Do you have any custom code outside of the WordPress directory?

Have the answers to these questions at the ready (the quick checks below can help with the disk-space questions), and bring each one up during any sales calls you take.
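For the two disk-space questions, a couple of quick commands on your current server will usually get you the numbers. The path below is an example, and the second command assumes WP-CLI is installed and is run from the install directory:

```sh
du -sh /var/www/html   # total disk used by the WordPress install (example path)
wp db size             # database size, reported by WP-CLI
```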
The representative on the other end of the line should have follow-up questions or solutions at the ready and be fully familiar with each point you're presenting. If they stumble at any of these discussion points, that's a red flag.

3 things to ask a new web host when you're in a pinch

What if your host fails you at the absolute worst time? Maybe you're in the middle of a campaign that's challenging your servers, or outdated software has exposed your visitors to a security breach. This is a stressful situation, and you need to make sure you're making the right move if you choose to switch at such a delicate time.

Our high-level suggestions are simple. First, you want to make sure you're moving into something more reliable, not just more affordable. Second, if you don't have time to put out the fire yourself, find a team to help. There's no shame (or wasting of resources) in outsourcing when it's critical to your business.

A few questions to ask in the midst of putting out the fire and switching hosts:

- Does the new web host offer white-glove migration and onboarding services?
- Do they offer reliable infrastructure and hosting servers?
- Will their support team work with me to identify gaps in performance or security?

Keep these questions at the ready so you can seek answers to them immediately should there be an issue. In all likelihood, if this happens you'll be distracted and looking for the shortest distance between where you are now and fixing the issue. These questions can help you make the actual best decision when your mind is elsewhere.

Remember: It's not your fault your web host is failing

If you know your WordPress site inside and out and it's still crashing, it's not your fault. Plain and simple. The facts suck, but it's the sad truth and a common example of what can happen when an industry like Managed WordPress Hosting grows to the point that businesses see an opportunity to exploit users.

Most big-box web hosts use flashy features and marketing to get you hooked with a low price, only to upsell you later. Smooth-talking reps and high-design sales collateral don't necessarily translate to excellent hosting (unless, of course, you're talking to the Pagely reps. We're all smooth talkers, and our hosting, and onboarding, is as excellent as it is flexible and customizable). Most web hosts don't reveal that you're not on dedicated hardware and that you're sharing resources with other customers, which is absolutely the wrong solution for any user looking to scale and optimize performance. You can't be bogged down by another customer's resource usage; it's a risk you just shouldn't be willing to take.

When it comes down to it, Managed WordPress Hosting means different things to different hosts. Most of them just keep WordPress auto-updated for you, but not your plugins, and they lack the knowledge to truly optimize and secure your site. For small-scale blogs and businesses just getting started, that might work. For you? It won't.

If we leave you with one thing and one thing only: don't be afraid of migrating hosts if you're running into issues with your current provider. The success of your business cannot wait. We're happy to dig into these points more with you. Contact us here, or leave us a question in the comments. Need help evaluating your WordPress Hosting infrastructure? We offer a free WordPress readiness checklist that you can request here.
Microservice Architecture (a.k.a. "microservices") is a method of developing software focused on building single-function modules. The term was coined in 2011, and the microservice approach quickly gained steam near the end of 2013. The advancement of microservices was driven, in part, by tech giants like Netflix, Amazon, Twitter, Google, and Uber evangelizing its benefits.

What's all the microservice buzz about? Why is anyone interested in microservices? To answer that question, we need to briefly explore the evolution of software architecture.

Monoliths

Not long ago, most software was "monolithic." A monolithic design means the user interface and the data access code are combined into a single, self-contained program. Monolithic software design comes with some downsides. Most notably, because of the self-contained, interdependent nature of monoliths, a failure anywhere in a program can cause the whole application to come crashing down. And there are other issues:

- Reduced agility. The application's codebase is released all at once. Developers need to code and deploy the entire stack. Rebuilding the whole application takes time and slows down the pace at which new features can be delivered.
- Complexity. Large applications become harder to understand and work with. Changes can have side effects that aren't obvious, thanks to easy-to-overlook dependencies, leading to "mysteries and magic" that can be difficult to untangle.
- Less scalability. Scaling becomes a matter of adding new instances to run the entire codebase, rather than adding instances to scale the individual pieces of the app that are used more than others. Unequal usage across the application can leave some resources wasted while others strain the server.

Service-Oriented Architecture

To overcome these problems, Service-Oriented Architecture (SOA) was introduced, becoming popular in the early and mid-2000s. SOA views an application as a collection of services running in parallel and connected by application programming interfaces (APIs). Each service is self-contained and represents a specific business activity with a desired outcome. This approach helps solve many of the downsides inherent to monolithic software designs:

- Easy maintenance. Distinct services can be updated without affecting other services.
- Platform independence. You can create an application by combining services that use different solutions.
- Improved reliability. It is easier to locate the problem and debug a small service than it is with a monolith, and a single service going down usually won't take down the entire application.
- Scalability. Individual services can scale by adding server resources as demand increases.

The Microservice Architecture (MSA) is a variant of SOA. In fact, some people argue that MSA and SOA are not really different things at all, making the term "microservices" superfluous. Nevertheless, the term seems to be here to stay.

What are microservices?
According to Jerry Andrews, when people talk about MSA they're generally referring to software design with these characteristics:

- Services typically rely on other services to accomplish their goals
- Services are often containerized using a container tool such as Docker, though they could also run in a virtual machine (VM)
- Asynchronous communication is handled via messaging or job queues, and synchronous communication is handled via HTTP with JSON messages
- Instance management is automated
- Components auto-scale

There is some debate on the exact definition, and some would argue with aspects of the defining characteristics repeated here.

SOA and MSA are also differentiated by how granular their services are (DZone has a helpful diagram comparing the two). There are also differences in how individual services are conceptualized. SOA views services through distinct lenses:

- Business services
- Enterprise services
- Application services
- Infrastructure services

In contrast, MSA views services purely as functions, without the separate lenses SOA uses. Microservices might be created for discrete functional services like:

- Authentication
- Email sending
- Account management
- Account creation
- Billing
- Messaging

Microservice architecture versus APIs

When learning about microservice architecture, there is often some confusion about how it differs from an API endpoint. The clearest way to express the difference between these two terms is to point out that an API is about communication, while a microservice is about how the software is organized. An API allows an app's data to be accessed or instructs the app to perform a function. This communication can happen either internally (in the case of a private API) or externally (in the case of a public API). These functions could include:

- Making changes to data
- Pulling data out of one app so it can be brought into another
- Telling the app to create a new user
- Triggering the app to send out an email

A microservice may leverage APIs to allow different services to interact, but the term refers specifically to how an application is divided up. If the difference between an API and MSA is still unclear, there's a great video from IBM that explores the differences in more depth.

Benefits of microservices

The self-contained nature of individual microservices introduces a variety of benefits:

- Simplicity. Individual microservices are usually simpler, so they can be easier to build and maintain.
- Autonomous, cross-functional teams. Technical decisions can be made faster and by smaller groups.
- Flexibility in technology. Like the SOA approach, microservices allow a blend of solutions to work together.
- Flexibility in scalability. Microservices can scale independently, so additional resources can be added exactly where they're necessary.
- Code can be reused. For example, here at Pagely we were able to use some Pagely-made microservices to power our new serverless hosting solution, NorthStack.

The downsides of microservices

While many companies are refactoring their monoliths in favor of a microservices approach, microservices aren't always the right answer. It's not the one true, perfect way to design an application that everyone should always use. In many cases the upsides of MSA outweigh the downsides, but not always, because:

- Microservices can introduce complexity. A monolith's complexity comes from how challenging it can be to understand how different code interacts; MSA's complexity comes from having more and more of the code split out into individual services. Five or six services aren't difficult to manage, but twenty, thirty, or more can be!
- Microservices require changes to culture and process. Companies that adopt microservices will also need to adopt an agile coding approach and, often, create a DevOps team.
- Microservices can be costly. Network calls made by APIs aren't free, and they add up. In addition, the labor cost of breaking up a monolith can create an otherwise unnecessary expense.
- Microservices may introduce security issues. Each inter-network communication path creates a new security risk that needs to be addressed and monitored.

Moving from monolith to microservices

Many applications begin their lives as monoliths, but as they mature, bottlenecks that should be split out into microservices become apparent. This refactoring process is known as "breaking up the monolith." The two architecture approaches are not an either-or thing or a right-or-wrong thing; many popular apps are a monolith-microservices hybrid.

Starting from scratch: monolithic versus microservice architecture

While microservices offer distinct advantages over monoliths, building software from scratch as a monolith and then moving toward a microservice approach may sometimes make more sense:

- Developing an application as microservices will usually be slower (initially) than taking a monolith approach.
- As the monolithic software matures and the big picture becomes clearer, the functions that should be split out into microservices will be more obvious.
- A well-designed monolith can easily be broken out into microservices later.
- If development time is a non-issue, taking a microservices approach from the beginning avoids the effort and expense of later refactoring.

You'll want to weigh the pros and cons of absorbing the time costs of starting with microservices versus delaying those costs and beginning with a monolith. The right answer will depend on the priorities of the company and its project.
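To close with something concrete: here's a minimal sketch of what one of the discrete functional services listed earlier (email sending) might look like as a standalone HTTP+JSON endpoint. PHP is used purely for illustration, and the file name and behavior are hypothetical, not a prescription:

```php
<?php
// email-service.php — a single-function microservice sketch: one small
// HTTP endpoint that does exactly one job and speaks JSON.
// Run locally with PHP's built-in server:
//   php -S 0.0.0.0:8080 email-service.php
header('Content-Type: application/json');

$payload = json_decode(file_get_contents('php://input'), true);

// Validate the request body before doing any work.
if (!isset($payload['to'], $payload['subject'], $payload['body'])) {
    http_response_code(400);
    echo json_encode(['error' => 'to, subject and body are required']);
    exit;
}

// A real service would hand off to a mail provider or a job queue here;
// this sketch just acknowledges receipt.
echo json_encode(['status' => 'queued', 'to' => $payload['to']]);
```

The point of the sketch is the scope, not the code: the service owns one function, hides its implementation behind an API, and can be deployed, scaled, and replaced independently of everything else.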
Traffic surges and overall growth are an indicator of business success. A big splash of attention resulting in lots of people coming to your site means you've done something right, but you've also got a lot on the line. You want to make sure your site keeps running fast no matter how many hits it gets.

As your site traffic grows, it puts more load on your server resources, and if you've got limited server resources, you can't scale. These principles are simple enough to understand, but what about when you're serving content at scale and dealing with the most demanding applications? The technical fundamentals of a well-performing site must be in place. In reality, this is essential for all sites, but it's non-negotiable for business-critical sites that are prone to traffic surges (looking at you, ecomm and elite publishers).

PHP workers and CPU core counts often get thrown around in hosting discussions, but there's a misconception as to what actually matters for scaling a site. So let's talk about performance and dig into PHP.

Performance rules

There are a few simple rules that your site's performance is based on, which we'll go into in more detail later. For now, it boils down to three things:

1. Use the caching layer whenever possible.
2. Don't have slow database queries.
3. Do as little as possible in PHP.

Scaling vs. performance

Planning for scalability shouldn't sacrifice the performance of your site.

Performance = how long it takes to serve a request = low latency
Scaling = ability to handle more requests at once = high throughput

To illustrate by example, this is our robust and scalable stack, which lets you sustain massive traffic without sacrificing performance.

Digging into PHP

Single-threaded software, like the most common installations of PHP, doesn't perform better just because there are more CPU cores; after all, each request can only run on one of them. But extra cores, when cache + PHP + MySQL are all on the same server, can help by reducing resource contention.

As it relates to hosting, the focus is often on the wrong metric here: the number of PHP workers. In reality, the number of workers doesn't actually matter; it's only a tuning parameter. The key is the number of CPU cores you get. There is an exception, though, and that's when your hosting stack is just Apache + mod_php. In that case, if you run out of PHP workers, you get an error. With PHP-FPM, Nginx Unit, LiteSpeed LSAPI, or stacks with Varnish/Nginx in front of Apache, extra requests can be queued between the different layers, making the number of workers immaterial to scaling.

Our goal with PHP is to have the right number of workers to keep our CPU cores at 100%, all the time. But it's not only PHP running on servers. Often we also see:

- A kernel, doing memory management and running the TCP stack
- Some logging infrastructure
- Some monitoring infrastructure
- In some cases, an Nginx cache (this is the standard Pagely setup), which may have additional logging infrastructure of its own
- In some cases, a Redis object cache (also standard at Pagely)
- In some cases, a MySQL database (not at Pagely, but this is the case at many VPS providers)

Generally, this doesn't add up to a lot unless the database is on the same server. On very high-traffic sites, Nginx, logging, the kernel, and Redis can add up, so on an 8- or 16-core server doing a lot of traffic you might want to plan for PHP to use 75% of the cores.
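To make that planning concrete, here's what it might look like in a PHP-FPM pool config, assuming an 8-core server where roughly 75% of the cores are budgeted for PHP and a few workers run per core (the worker-per-core range is discussed next). The numbers are illustrative, not a recommendation:

```ini
; /etc/php-fpm.d/www.conf (excerpt) — illustrative numbers only
pm = static           ; fixed worker count; no spawning churn under load
pm.max_children = 18  ; ~6 cores budgeted for PHP x ~3 workers per core
pm.max_requests = 500 ; recycle workers periodically to contain memory leaks
```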
So if your page just uses PHP, doesn't use MySQL, doesn't talk to any APIs (making HTTP requests somewhere), and doesn't use Redis, then things are pretty simple: 1 PHP worker per core would suffice, and it would run at maximum efficiency, giving you the least amount of overhead. That means the best performance (lowest latency) and the highest throughput (average requests per second).

But that's not the real world. In the real world, we need to talk to data stores, read files from disks, and make requests to APIs. So we generally want to be in the range of 2-4 workers per core for WordPress. A few exceptions to that:

- If you do a ton of very fast, very simple requests, you might want to go higher, because there's some overhead in the request read/respond cycle doing IO between, say, Nginx and PHP-FPM, and we need extra workers to keep the CPU busy.
- If you're building an application hitting a bunch of APIs, or making a bunch of slow database connections, you might want even more.

As a baseline at Pagely, we generally don't run things higher than 4 workers per CPU core.

So what is the downside of optimizing PHP for efficiency? If your site suddenly gets much slower, due to a slow external API call, you will no longer be maxing out your CPU, since you'll be spending so much time waiting. The upside here is that more workers wouldn't help you much anyway, and having a bunch of them can expose you to lots of interesting server death conditions due to running out of RAM. Stay tuned for a future post on these server death conditions... we'll get to it someday soon.

TL;DR

When it comes down to it, you want to optimize PHP so that it uses your target number of CPU cores without wasting resources. This ensures:

- Best performance (lowest latency)
- Good reliability (low chance of running out of memory)
- Best efficiency (highest throughput when maxed out)

Optimizing your PHP code for higher performance

In general, optimizing your PHP code for higher performance fits into two categories:

1. Running less code
2. Making fewer calls to external resources

1. Running Less Code

Running less code is done in both the obvious way (turning off plugins where the feature adds little value) and the less obvious way (transient caching and code optimization).

Transient caching is pretty straightforward (see the sketch at the end of this section):

1. Find a section of code that takes a lot of time to produce some HTML.
2. Figure out if it's personalized, or can be cached globally or as some group of variants.
3. Use the WordPress Transients API to cache that HTML.

A tool like New Relic can help you identify slow chunks of code so you can make those fixes efficiently. Transient caching can also be used for datasets instead of just HTML, but in general, the closer to final output you can do your caching, the more impact it will have.

Code optimization is harder, and it's where you likely need a developer to dig into things. It might be as simple as switching to a more efficient algorithm, but often it will take a larger rethink and rework of how things are being done.

2. Making Fewer External Calls

When you make a MySQL query or an HTTP request to an API like Twitter's, PHP sits and waits for the response. You want to do less of this. The Transients API can be the answer here too, caching the data from API calls when it doesn't need to be fresh. On the database side, you might be able to fetch less data through smart page design. Does a page really need to show content from 100 posts at once? Probably not.
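Here's the transient caching sketch promised above, using the WordPress Transients API to cache an expensive, non-personalized chunk of HTML. The transient name, the ten-minute lifetime, and the renderer function are arbitrary examples:

```php
<?php
// Cache an expensive HTML fragment for 10 minutes via the Transients API.
function get_popular_posts_html(): string {
    $html = get_transient('popular_posts_html');
    if (false === $html) {
        // The slow work only happens on a cache miss.
        $html = build_popular_posts_html(); // hypothetical expensive renderer
        set_transient('popular_posts_html', $html, 10 * MINUTE_IN_SECONDS);
    }
    return $html;
}
```

On a host with a persistent Redis object cache (the standard Pagely setup described earlier), transients are served from Redis rather than the database, which makes this pattern even cheaper.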
I spoke about how to determine these problems and presented strategies for solving them at WordCamp Phoenix. You can watch the whole talk below.

TL;DR

When it comes to your site's performance, and how PHP affects it, it's all about being smart. Understand what's needed in terms of PHP worker counts vs. CPU cores, and how to use your resources efficiently without extra code hanging around to slow things down. If it's over your head, talk to a managed WordPress host that can help alleviate the pressure and get things running smoothly.

https://pagely.com/wp-content/uploads/2019/06/Performance-PHP.mp4
"The cloud" changed the game so much, and so quickly, that it's difficult to remember what things were like before it arrived. But not that long ago, getting server resources meant buying or leasing an actual box off the rack of a server farm. It wasn't instant. It wasn't cheap. And it usually required a great deal of technical knowledge.

"The cloud" freed us from the constraints of working with individual boxes. Instead, it offered an immense pool of resources that could be borrowed from as needed. Renting "shares" of the cloud eliminated the need to invest in hardware upfront and provided near-instant scaling. Its value was a one-two combination: economies of scale and elastic autoscaling drove down the cost, while "infrastructure as a service" lowered the knowledge barrier. We haven't looked back since.

One of the most important cloud services is Amazon Elastic Compute Cloud (EC2), which was introduced in 2006. EC2 quickly became one of the most popular AWS services and remains so today.

What is EC2?

You can think of EC2 as the cloud equivalent of a server. A server, broken down to its essential elements, is a combination of storage and compute resources that can be remotely accessed. What you're actually getting with EC2 is a virtual machine, which AWS calls an "instance." It includes all the resources of a physical server, but it's not a distinct box; it's a slice of a much larger pie. Functionally, it's the exact same thing, but behind the scenes it's more like tapping into an ocean of resources.

There aren't really any downsides to using a virtual machine versus a physical machine. In fact, there are only advantages. You can get as much, or as little, compute capacity as needed for your project and scale up and down as your needs change over time. And it's really easy to do so. EC2 is a web service, so you can log in and create a new EC2 instance quickly and without the technical knowledge that would normally be required to provision a server.

Why use EC2?

In less than 10 minutes you can rent a slice of Amazon's vast cloud network and put those computing resources to work on anything from data science to bitcoin mining. EC2 offers a number of benefits and advantages over alternatives. Most notably:

Affordability

EC2 allows you to take advantage of Amazon's enormous scale. You pay a very low rate for the resources you use. The smallest EC2 instance can be rented for as little as $0.0058 per hour, which works out to about $4.18 per month. Of course, instances with more resources are more expensive, but this gives you a sense of how affordable EC2 instances are. With EC2 instances, you're only paying for what you use in terms of compute hours and bandwidth, so there's little wasted expense.

Ease of use

Amazon's goal with EC2 was to make accessing compute resources low-friction and, by and large, they've succeeded. Launching an instance is simply a matter of logging into the AWS Console and selecting your operating system, instance type, and storage options. At most, it's a 10-minute process, and there aren't any major technical barriers preventing anyone from spinning up an instance, though it may take some technical knowledge to leverage those resources after launch.

Scalability

You can easily add EC2 instances as needed, creating your own private cloud of compute resources that perfectly matches your needs.
Here at Pagely, a common configuration is an EC2 instance to run a WordPress app, an instance to run RDS (a database service), and an EBS volume so that data can easily be moved and shared between instances as they're added. AWS offers built-in, rules-based autoscaling so that you can automatically turn instances on or off based on demand. This helps you ensure that you're never wasting resources but always have enough resources available to do the job.

Integration

Perhaps the biggest advantage of EC2, and something no competing solution can claim, is its native integration with the vast ecosystem of AWS services. Currently there are over 170 services. No other cloud network can claim the breadth, depth, and flexibility AWS can.

What can you do with EC2 instances?

Anything you can do with a computer, you can do with EC2. Data scientists use EC2 instances to crunch large data sets. Animators use EC2 instances to render 3D worlds. At Pagely, we use EC2 instances to serve websites for our Managed WordPress Hosting customers.

Instances come in a variety of different hardware configurations (memory and CPU) called "types" and are grouped into families. Each family has a variety of instance types with resources optimized for specific use cases. To understand all the different ways EC2 instances can be leveraged, it can be helpful to explore the available instance families so you understand what's available and how it can be put to use. The first letter of the instance type indicates which family it is in. For instance, a c5.2xlarge is in the compute-optimized family (as indicated by the "c") and a p3dn.24xlarge is in the accelerated computing family (as indicated by the "p").

General purpose instances: A, T, and M

A and T instances are simple instances with modest options, best suited for testing environments; generally, you would not want to use them as a production server. M instances offer more balanced resources and are suitable for more general use (they're powerful enough to be used for production on a budget).

Compute-optimized instances: C

Compute-optimized instances offer a low price-per-compute ratio, making them a cost-effective solution for compute-heavy applications like web servers, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.

Accelerated computing instances: P, G, F

These are GPU-optimized instances for graphics-intensive applications or GPU compute applications. These instances are well-suited for speech analysis, machine learning, 3D rendering, and genomics research.

Memory-optimized instances: X1e, X1, R

Memory-optimized instances are for applications that don't require a lot of compute resources but do require a lot of RAM. Often these instances are useful for data science applications, such as data mining and data analysis, where the data set can be stored in RAM.

Storage-optimized instances: H, I, D

Storage-optimized instances deliver high disk throughput and low-latency SSD or HDD storage for a variety of storage-heavy applications like distributed file systems, big data workload clusters, and data processing applications like Apache Kafka, Elasticsearch, and NoSQL databases.

The range of hardware options across these instance types can be overwhelming. You can use a resource like https://www.ec2instances.info/ to make comparisons and find the hardware that's best for your application.
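To give a sense of how low-friction launching one of these instance types is in practice, here's a minimal sketch using the AWS SDK for PHP. The AMI ID and key pair name are hypothetical placeholders; credentials are assumed to come from the environment or ~/.aws/credentials:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client([
    'region'  => 'us-east-1',
    'version' => '2016-11-15',
]);

// Launch a single compute-optimized instance.
$result = $ec2->runInstances([
    'ImageId'      => 'ami-0123456789abcdef0', // hypothetical AMI ID
    'InstanceType' => 'c5.2xlarge',
    'KeyName'      => 'my-key-pair',           // hypothetical key pair
    'MinCount'     => 1,
    'MaxCount'     => 1,
]);

echo $result['Instances'][0]['InstanceId'] . PHP_EOL;
```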
Persistent storage with EBS An instance can be launched with its native storage (boot disk) or, optionally, Elastic Block Store (EBS) can be added as a service. EBS’s main advantage is that it can be easily attached to any instance. Using only the storage provided by an instance makes it harder to share data with other instances, and once that instance is turned off the data is no longer available (this is called ephemeral storage). Pricing Amazon provides a generous free tier for exploring many of its products, including 750 free hours of EC2 usage per month for the first 12 months. When you’re ready to purchase an instance they offer three options: On-Demand Instances Pay only for what you use. No long-term commitments or upfront payments. Reserved Instances One-time, upfront payment for a period of 1-3 years. Machine is always on. Save a significant amount of money by pre-paying. Spot Instances Purchase unused EC2 capacity at a discount for significant cost savings. The machine could be reclaimed by Amazon, so it’s not a good option if continuous uptime is important. So to simplify things: a Reserved Instance is great if you have a long-term need for an instance that is always on. A Spot Instance saves the most money when you can tolerate interruptions, while On-Demand is your best option if it’s important that the machine be available whenever you need it. This flowchart from Cloud Academy summarizes the decision nicely. Why we use EC2 at Pagely Here at Pagely we use C5 instances and VBurst instances. We analyzed many instance types and found these best suited for the unique resource demands of WordPress. Along with AWS RDS as our database solution, they allow us to serve web pages quickly and reliably. The reason we chose EC2 over competing solutions can be boiled down to one important advantage: EC2 is part of the AWS cloud ecosystem. No other cloud solution comes close in terms of the number of services (over 170 and counting). The breadth and depth of services offered by AWS is currently unmatched. There are less expensive cloud providers on the market, but we chose the AWS ecosystem because it offers our customers unparalleled flexibility. All those services and instance types allow us to meet the demands of any customer — whether they’re a blogger, an ecommerce store, or an international enterprise with unique and demanding requirements. AWS can do it all — reliably and securely.
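Circling back to the storage discussion above: this is roughly what creating and attaching a persistent EBS volume looks like in boto3. The instance ID and availability zone are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 20 GiB general purpose SSD volume in the same availability
# zone as the target instance (zone and instance ID are placeholders).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,
    VolumeType="gp3",
)

# Wait until the volume is ready to be attached.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume. Unlike the instance's ephemeral storage, it
# survives stops and can later be detached and attached elsewhere.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```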
Site reliability is generally talked about less often than other performance indicators, like page speed. With many web hosts offering multiple 9’s of guaranteed uptime, it’s become less of a ‘hot’ topic and more of the norm that your site will be up 99.99x percent of the time. But what many people don’t realize is that there’s more to reliability than uptime. One of the most important pieces of the reliability puzzle is Data Reliability, which generally refers to the protection of both your database and your site files. Your posts and pages live in the database, while your themes, plugins, and uploads live in the file system, so it makes sense that protecting both should be a priority. There are situations, both catastrophic (think natural disaster in the region where a data center is located) and more common, day-to-day occurrences, that can endanger your data. Your host’s configuration plays a massive role in how well you are protected from both. A few minutes invested into ensuring your host has adequate data reliability protections in place can prevent a disaster. Let’s explore how server configurations can help or hinder Data Reliability, the trade-offs those optimizations can produce, and how to find the right balance between data reliability and other performance factors based on your organization’s needs. The Container Model The Container Model is fairly straightforward: all site files and the database live on the same machine, allowing for simpler management and slightly lower latency between assets, like the site files and the database. But when something goes wrong on that single machine, you can imagine how a small issue can escalate into a larger one. We’ve seen cases where sites hosted on competing containerized platforms experience a spike in traffic, PHP uses all the system memory, and then the machine locks up, corrupting the database tables. On a larger site, that could mean an hour or two of recovery time before they’re back up, along with lost data, not to mention lost business. If Google, or another search engine, attempts to index your site during this downtime, it could impact your site’s ranking because they do not want to serve unreliable sites to their users. Downtime + lost data = risky business This risk becomes even more significant when other sites are sharing those same resources and backups are stored on the same machine. The Dedicated Resources Model The alternative to the grouped, container model is what we call the Dedicated Resources Model. By keeping the database, code, and backups on separate, dedicated server instances, we ensure that any failure or issue with one asset has minimal impact on the remaining assets. This approach is superior to sharing a single server in terms of both data reliability and performance at scale. How Pagely isolates & connects server assets. If the Dedicated Resources approach makes sense to you, these are the questions to ask of your current and potential hosting solutions: Are there separate resources dedicated to your code and your database? What deliberate hardware decisions have they made to protect your data? How do they manage backups and recovery? Are backups stored separately? If they aren’t actively addressing these issues, you could be looking at not only potential downtime and data loss, but scalability challenges as you grow.
The Trade-off We’ve established that having the database and app on separate server instances (separate virtual machines) is ideal for overall reliability, but this server utopia comes at a cost: latency. Because the resources are separated, this configuration does result in slightly higher latency between the two resources, and this impacts page speed. But the real-world performance difference caused by this latency is negligible (typically in the 1-2 millisecond range), as long as there are no major issues with the site code or plugins. This is where the power of a reliable managed hosting partner comes into play – to find and help you fix problematic code issues. It’s something that Pagely does for every customer, and a level of service most other companies do not offer. How Important is Data Reliability to You? Depending on the complexity of your WordPress website and the requirements of your business, the answer here is usually ‘very important.’ With that answer in mind, it’s a good idea to ask your team the following questions to help you determine where your priorities lie: Do we have noticeable performance issues – and if more hardware isn’t the answer, are we willing to address the root cause of those issues? Can we quantify the value of our data reliability and uptime? Are things like page speed empirically more or less valuable to us than data reliability, scalability, and flexibility? Can we achieve both (better performance and data reliability) at above-acceptable levels with our current provider? That last question is at the heart of what we’ve dedicated the last decade to solving at Pagely. Yes, we’ve made hard-but-smart decisions around the trade-offs that increase reliability within WordPress. And in doing so, we’ve balanced all the key performance and stability metrics in pursuit of the one elevated metric larger organizations strive for: scalability. By working with your team to improve your code and customize your server configuration to match your specific needs, we’ve created real-world solutions that can scale as you grow.
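If you want to put a number on the app-to-database latency discussed above, a quick round-trip measurement is easy to script. Here’s a sketch using the pymysql library; the hostname and credentials are placeholders for your own database endpoint:

```python
import time
import pymysql

# Connect to the database (hostname and credentials are placeholders).
conn = pymysql.connect(
    host="db.example.internal",
    user="wp_user",
    password="secret",
    database="wordpress",
)

# Time a trivial query many times to estimate per-query round-trip cost.
samples = []
with conn.cursor() as cursor:
    for _ in range(100):
        start = time.perf_counter()
        cursor.execute("SELECT 1")
        cursor.fetchone()
        samples.append((time.perf_counter() - start) * 1000)

conn.close()
print(f"median round trip: {sorted(samples)[len(samples) // 2]:.2f} ms")
```

Run it once against a local database and once against a dedicated database instance and you can see the 1-2 millisecond difference for yourself — and judge whether it matters next to the reliability gains.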
A fast loading website has many benefits for both you as a website owner and your visitors. Slow site speeds have been shown not only to hamper the user experience, resulting in reduced conversion rates and higher bounce rates, but also to negatively affect the position of your website in the search engine results pages. While upgrading to a high-performance web host is a quick and effective way to improve website loading times, there’s also a free plugin you can install on your WordPress website which can help your pages load faster. That plugin is Smush, and in this post, we’ll be discussing how it can help you effortlessly optimize your images to avoid frustrating your visitors with slow loading content. Why Use Smush to Automatically Optimize Your Images As mentioned, Smush is a free image compression and optimization plugin for WordPress. This plugin is maintained by the good folks over at WPMU DEV and is loaded with nifty features, including lossless compression, bulk smush (not to be confused with ‘Hulk smash’), incorrect size image detection, and automated optimization, to name a few. All of these features combine to help your web pages load faster, without any reduction in image quality – the only thing you have to do is install the free plugin. Now each time you add a new image to the WordPress media library, it will automatically run through the Smush service, reducing the file size – without any visible loss in image quality. One of the other great things about this plugin is that it can optimize all of the existing images on your site without you having to re-upload them (limit of 50 images without a WPMU DEV subscription). How to Use Smush on Your Website As the Smush plugin is free to use, it can be installed on your site directly from the WordPress plugin directory. To do so, log into your site’s admin area and then navigate to Plugins > Add New using the sidebar menu. From the Add Plugins screen, enter ‘smush’ in the search field and then install the first item listed in the results. Once the plugin has been installed and activated, it will run in the background, and whenever a new image file is added to your website it will be optimized to load as fast as possible, without any loss in quality. The settings for the plugin can be found under the Smush link in the main menu of the WordPress admin dashboard. From there, you have the following options: Bulk Smush: Optimization of existing images in your media library (limit of 50 on the free version) Directory Smush: Optimization of images outside of your uploads directory Integrations: Integrates with Gutenberg, Amazon S3, and photo galleries CDN: A premium feature that utilizes WPMU DEV’s CDN for multi-pass lossy compression and auto-resize features Lazyload: Defers the loading of below-the-fold imagery until the page has loaded, adjustable by page/image type Settings: General plugin settings Conclusion By using this free plugin and image optimization service, you can ensure that all existing and future images that you use on your website will load as fast as possible. This will not only help your website load faster, but can also help reduce the amount of bandwidth your site uses and how much data your visitors need to transfer in order to view your content. Note that there are more advanced features that require a WPMU DEV subscription, like bulk updating more than 50 images, updating the original image on the server, and more. Loving Smush? Have an alternative to suggest? Comment below.
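A postscript for the do-it-yourself crowd: you can approximate the resize-and-compress idea locally, before images ever reach WordPress, with Python’s Pillow library. This is an illustrative sketch of the general technique, not Smush’s actual pipeline; the directory names and quality setting are assumptions you’d tune:

```python
from pathlib import Path
from PIL import Image

MAX_WIDTH = 1600  # widest dimension most themes actually render

def optimize(src: Path, dst: Path) -> None:
    """Resize an oversized image and re-encode it with compression."""
    img = Image.open(src).convert("RGB")
    if img.width > MAX_WIDTH:
        ratio = MAX_WIDTH / img.width
        img = img.resize((MAX_WIDTH, int(img.height * ratio)))
    # quality=82 with optimize=True shrinks files substantially with
    # little visible loss; adjust to taste.
    img.save(dst, "JPEG", quality=82, optimize=True)

out_dir = Path("optimized")
out_dir.mkdir(exist_ok=True)
for path in Path("uploads").glob("*.jpg"):
    optimize(path, out_dir / path.name)
```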
There’s a lot of hype around page speed, and the hype is usually centered around two main points: SEO: The Google algorithm measures page load time as one of its many search ranking factors UX – User Experience: It’s well documented that slow loading websites create a poor user experience, increase bounce rates, and lower conversion rates For those reasons, optimizing your WordPress site for speed is generally worth the time and effort, and here are a few simple steps that you can take to get started with speed optimization. Measuring Page Speed Speed is important, but testing your speed can be tricky. There are a variety of tools and methods to test page speed, but most test your site in ideal circumstances, and no two tests are the same. One solution: find a benchmarking tool from a reputable source, and then test multiple WordPress sites (including your own) to get a sense of what is good, bad, and ugly. By comparing the benchmarks of your site against other reputable WordPress projects, you’re able to set a reasonable goal for your efforts. Improving Page Speed Caching is generally the de facto first solution for improving page load times, as serving traffic from the cache can be up to 1,000 times more efficient than serving it from PHP. Say you serve your site without any caching: it might take 1,500 milliseconds to generate a page, whereas serving that same page from the cache might take 5 milliseconds. Using Pagely’s VPS1 solution as an example, you might be able to do 8 PHP requests a second without caching, while you can do 8,000 requests a second with caching. It’s important to note that larger, more complex sites often have specific caching requirements, where it makes sense to cache some things while leaving others alone. This customization is typically more resource-intensive. But if your managed hosting team and platform can help you move 20% of your traffic to the cache via a custom-built cache key and additional capabilities, that can make up for the fact that your dedicated hardware is more expensive. Check Your Code Your code and plugins play a substantial role here as well. If your project is laden with inefficient code, throwing more hardware at the problem is neither a financially prudent nor a scalable solution. You need to fix those inefficiencies, and a good hosting provider can help with hands-on guidance. Optimize Your Images This might be common sense, but it’s worth noting: large image files are page speed killers. Quick tips for images: Upload images at the smallest dimensions required by your design Compress images locally before uploading, or run a server-side compression plugin Use the SVG format for graphics when possible Upgrade Your Hosting Where you host WordPress plays a substantial role in how fast your site will load. Many inexpensive hosts use outdated hardware and jam as many accounts onto a single server as possible, leaving fewer resources available to you. These same hosts are often vague about how those resources are allocated and what their infrastructure looks like, making it hard to know where you stand. If you don’t know how your host prioritizes the resources running your project, how can you compare them to others or effectively improve your page speed? What to Look For in a Fast WordPress Host When reviewing a current or potential hosting partner, transparency is key.
Building on that, your host is only as fast as the infrastructure their tech is built upon, so look for a host with reputable technology partners. Arguably the most well-known name in hosting infrastructure is, of course, Amazon. Amazon’s cloud powers major brands like GE, Expedia, Pinterest, LinkedIn, Dow Jones, Adobe, Pfizer, and even NASA, and it’s the same infrastructure the best hosting companies build on. So, what tools should your hosting provider have in place to keep your site running quickly? A CDN A content delivery network is a geographically distributed network of proxy servers and their data centers. By serving a user content from the location physically closest to them, your site performs faster and loads better for each visitor. For example, at Pagely, our CDN, PressCDN, is built specifically for WordPress and services clients across five continents, serving images and static assets from a global network of servers. This means faster download speeds for your site visitors. A Web Server Full Page Cache With a proper caching mechanism in place, content is cached at one or several points of presence (POPs). When a request is made for a site, the POP closest to the user delivers the cached page. This means faster speeds for every visitor. To use Pagely as an example again, we built PressCACHE as a global WordPress acceleration system that works much like a CDN but is specifically designed to cache and serve WordPress page output. This makes your site available instantly, from anywhere. For more on caching, check out Maura Teal’s talk on The Balancing Act of Caching in WordPress. A PHP Stack Optimized for WordPress There are several small PHP concepts that add up to improved performance and speed. First, tuning PHP for WordPress typically involves running the right number of php-fpm workers for the hardware to give maximum throughput while limiting overhead and minimizing memory use. Second, running up-to-date versions of PHP so you get the newest performance enhancements. And last, configuring the supporting infrastructure, such as OPcache and an object cache, for maximum benefit. Bonus: the better hosts will make individualized tuning changes for your site. A Hardware Stack Optimized for WordPress The hardware a host uses varies widely, and not all hosts are transparent about how they configure resources, who their backbone provider is, and so on. Generally, you want separated resources for site code, database, and backups. And there are benefits to the database options you choose, like data reliability and scaling. The bottom line: there’s no reason why your host shouldn’t have protocols and tools in place to earn your trust as a fast provider. The examples listed here should help you get a sense of how your host stacks up. If they have a logical answer for each area, you can rest a little easier. Bonus: Look for a host that utilizes Nginx for caching NGINX is a fast, high-performance web server that accelerates content and application delivery, improves security, and facilitates availability and scalability for half of the world’s busiest sites. By placing Nginx in front of the traditional LAMP stack WordPress is installed on, it can also cache pages and static assets for faster load times. Combine that with Redis for object caching and PressCDN for content caching around the globe, and you have a recipe for some of the fastest WordPress hosting available.
At Pagely we default to an Nginx cache node with Apache behind it serving PHP, though you can also opt for an Nginx-only stack. One Last Thing to Consider While page speed is important, your goal should be to fix the larger issues that substantially affect page speed, rather than trying to achieve a perfect benchmark score or shave 1 millisecond off your load times. In fact, you might even consider that 1 millisecond a vanity metric. Is it really going to be noticeable to your users, or is it a number you can throw into a report to make your team look good? It’s much more important to focus on systemic improvements that lead to measurable outcomes. The quality of your code, the capabilities of your hosting provider, and the tools they utilize are a great place to start. Solve real problems and don’t chase perfection, in other words. And don’t sacrifice performance or data reliability for the sake of speed.
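To make the caching math above concrete, here is a toy full-page cache in Python. It is only a sketch of the idea behind systems like PressCACHE or an Nginx page cache, not their implementation: the expensive page generation happens once per cache key and TTL window, and subsequent requests are served from memory:

```python
import time

CACHE_TTL = 300  # seconds a cached page stays fresh
_cache: dict[str, tuple[float, str]] = {}

def render_page(path: str) -> str:
    """Stand-in for the expensive PHP/database work (~1500 ms)."""
    time.sleep(1.5)
    return f"<html><body>content for {path}</body></html>"

def cache_key(path: str, logged_in: bool) -> str:
    # Real systems build keys from more signals (cookies, device,
    # language); logged-in traffic usually bypasses the cache entirely.
    return f"{path}|{'auth' if logged_in else 'anon'}"

def serve(path: str, logged_in: bool = False) -> str:
    key = cache_key(path, logged_in)
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]                      # cache hit: microseconds
    page = render_page(path)               # cache miss: ~1500 ms
    _cache[key] = (time.time(), page)
    return page

serve("/about")   # slow: generates and stores the page
serve("/about")   # fast: served straight from the cache
```

The "custom-built cache key" idea from earlier lives in `cache_key`: the more traffic you can safely funnel onto a shared key, the higher your hit rate and the less PHP work your hardware has to do.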
As innovators in the hosting space, we’re constantly testing hardware and software solutions here at Pagely to find the optimum balance between price and performance for the unique demands of WordPress. Through all of that testing, Amazon RDS emerged as the clear winner for our database solution. We’ve built our hosting solution on it, so it’s fair to say we’re fans. In this post we’ll give an overview of RDS, how it works, and its features and benefits. For us, as for many other businesses, it comes down to a mixture of reliability, flexibility, and, most importantly of all, seamless integration with the powerful AWS cloud ecosystem. What is RDS? When Amazon Relational Database Service (RDS) burst onto the scene in 2009, it was immediately hailed as a game-changer. It was a revolutionary “database as a service” solution that made it simple and easy to set up, manage, and scale a database, with built-in scalability, redundancy, and failover protection. One of its biggest innovations was separating the database from the server. “When you buy a server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon RDS, these are split apart so that you can scale them independently. If you need more CPU, less IOPS, or more storage, you can easily allocate them.” Source RDS is a managed solution, so you won’t have shell access and there are restrictions on advanced privileges, but the majority of what you would need to do is handled for you, so this is a non-issue in most cases. RDS versus other cloud database solutions When it comes to cloud database solutions, AWS is the industry leader by a landslide. While alternatives from Google, Microsoft, Oracle, and IBM exist, they have not become as popular, in spite of aggressive pricing aimed at stealing market share from AWS. RDS’s preeminent advantage over other cloud database solutions is that it integrates seamlessly with AWS’s robust ecosystem of cloud-based tools, services, and solutions. AWS is far and away the leading cloud provider, with the most powerful, reliable, and flexible suite of cloud services that integrate flawlessly with one another. Today, Amazon offers an incomparable array of over 170 cloud services and continuously introduces more to complement existing services and add new functionality: EC2 Instances Amazon Simple Storage Service (S3) Amazon Aurora AWS Lambda Amazon Lightsail Amazon CloudFront Amazon Elastic Block Store (Amazon EBS) Amazon Route 53 See the full list of services here Cloud services like these have become the building blocks of the internet as we know it today. These services can be combined to create powerful new solutions that are greater than the sum of their parts, like our Managed WordPress Hosting. How does RDS work? RDS operates within an instance (an isolated, cloud-based database environment). When you create a new database, you choose the database engine it runs. RDS is compatible with the most popular engines: MySQL MariaDB PostgreSQL Oracle Microsoft SQL Server Aurora The computation and memory resources allocated to the database are determined by what AWS calls its “instance class.” As a database grows, its instance class can easily be upgraded with very little downtime to provide more resources, making RDS a highly scalable and flexible solution. There are currently 27 instance classes to choose from, with a range of resource options.
Instances range from as little as 1 GB of memory up to 256 GB, and from a single processing core up to 64 cores. There’s an instance class to fit pretty much every use case, and all of these instance sizes are available to Pagely customers, so RDS is part of what makes Pagely such a flexible, customizable hosting solution for companies with unique technical demands. High availability For enterprises that rely on their databases, one of the most important features offered by RDS is its Multi-AZ (Multiple Availability Zone) option. Two distinct copies of the database are created — a primary that handles read and write requests and a secondary that is only written to. If there is an availability issue, the secondary database is promoted into the primary role and traffic is re-routed using DNS. It’s important to note that the Multi-AZ feature is not a scaling solution, because the secondary database cannot serve read traffic. For scaling, Amazon has Read Replicas. Read Replicas Using Read Replicas, “you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.” Source To get started with Read Replicas, you first pick a database that will operate as the source. A snapshot is taken to duplicate the database, and the replica is then kept up to date whenever there is a change to the source database. Note that Read Replicas are only compatible with the MariaDB, MySQL, Oracle, and PostgreSQL engines. Benefits of RDS RDS’s feature set makes it an attractive database solution: A managed solution AWS takes care of backups, software patching, automatic failure detection, and recovery, so your administrative burden is close to nil. They have a nearly impeccable track record in this department, so you can rest easy knowing your database is being managed by experts. High performance RDS uses Solid State Drives (SSDs) to achieve high input/output (IO) throughput and also offers Provisioned IOPS (SSD) Storage, which allows you to specify a target input/output operations per second (IOPS) rate when creating the database instance. Scalability RDS users can easily and quickly scale their compute and storage resources with very little downtime. They also offer read replicas that allow users to scale out read-heavy databases. Superior availability and durability RDS offers automated backups for point-in-time recovery within a user-specified retention period. This means you can restore to literally any second within the retention period. In the event of a hardware failure, AWS will automatically replace the hardware on that compute instance. Robust security If an instance is running with RDS encryption, the data, its backups, replicas, and snapshots are all encrypted, and SSL is used to encrypt data in transit. Resource-level permissions give granular control over who has access to the database and what capabilities they have. Manageability Amazon CloudWatch is included, so you’ll have detailed analytics on your database’s performance at no additional charge. You can receive text and email alerts through Amazon SNS. AWS Config records and audits changes to the database configuration to support governance and compliance needs. And Amazon’s Performance Insights makes analyzing performance data to tune your database easier than ever. If you’re already using the AWS ecosystem, there’s no compelling reason to choose anything else. Even if you’re not, RDS offers a superior feature set, with a much longer track record of reliability than competing solutions.
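For a sense of how little ceremony is involved, here is a boto3 sketch that creates a small Multi-AZ MySQL instance and then adds a read replica. The identifiers and credentials are hypothetical placeholders (and in practice you’d keep secrets out of source code):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a small Multi-AZ MySQL instance (identifier, username, and
# password are placeholders -- substitute your own values).
rds.create_db_instance(
    DBInstanceIdentifier="wordpress-db",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=20,
    MultiAZ=True,   # standby copy in a second availability zone
)

# Wait until the primary is available before creating a replica.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="wordpress-db"
)

# Add a read replica to scale out read-heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="wordpress-db-replica",
    SourceDBInstanceIdentifier="wordpress-db",
)
```

Scaling up later is a matter of calling `modify_db_instance` with a larger `DBInstanceClass`, which is the low-downtime upgrade path described above.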
Amazon RDS offers many advantages over traditional database solutions. It’s flexible, reliable, and easily manageable. If you’re researching RDS for WordPress website hosting, you may want to consider a Managed WordPress Hosting solution that uses RDS, like Pagely. Pagely masks the complexity and eliminates the time costs associated with managing your own database and server. If you’re interested in hassle-free WordPress hosting powered by AWS infrastructure and optimized by a team of WordPress hosting experts, start a discussion with our sales team or see our plans and sign up here.
This article covers our public notifications related to major security issues our clients and the WordPress community should know about. We are always focused on prevention and the mitigation of risk to our clients, and keeping you updated here is part of that process. List of Vulnerable Plugins During This Month Plugins Closed by WordPress Security The WordPress security team decides to close a plugin when a security issue is found and the developer doesn’t release a patch in a timely manner. You can read more about this at https://developer.wordpress.org/plugins/wordpress-org/alerts-and-warnings/. Relevant Vulnerabilities secure-file-manager: Authenticated File Upload (https://wpscan.com/vulnerability/10478) ait-csv-import-export: Unauthenticated File Upload (https://wpscan.com/vulnerability/10471) augmented-reality: Unauthenticated File Upload (https://wpscan.com/vulnerability/10457) These plugins have critical vulnerabilities that, when exploited, would give an attacker complete control over your website. All of them are closed, which means no new installs are allowed but existing installs will keep working, so please check whether you have any of them installed (even if it’s not activated) and remove them from your plugins folder. woocommerce-anti-fraud: Unauthenticated Order Status Manipulation (https://wpscan.com/vulnerability/10479) Versions earlier than 3.3 of this plugin have a bug that, when exploited, could cause serious damage to your online store. An unauthenticated attacker would be able to change the status of all orders, making them difficult to handle since the data would no longer be reliable. On November 23 the developer released a new version.
This article covers our public notifications related to major security issues our clients and the WordPress community should know about. We are always focused on prevention and the mitigation of risk to our clients, and keeping you updated here is part of that process. List of Vulnerable Plugins During This Month Plugins Removed From the Repository The WordPress security team decides to close a plugin when a security issue is found and the developer doesn’t release a patch in a timely manner. You can read more about this here. If you are using one or more of the above plugins, we recommend deactivating them until the developer releases a patch for the mentioned vulnerability, or consider a more reliable alternative. Relevant Vulnerabilities ti-woocommerce-wishlist: Authenticated WP Options Change A critical vulnerability was found in this plugin that, when exploited, allows an attacker to: Change the site options Create malicious redirects Escalate privileges (log in as an administrator) This issue was resolved in the free version 1.21.12 on October 16; however, when checking the premium version we noticed it was still vulnerable. It was finally resolved on October 28, after we reported it. More details here. WPBakery Page Builder: Authenticated Stored XSS WPBakery Page Builder, formerly Visual Composer, had a medium-severity vulnerability in versions before 6.4.1 that was only exploitable by high-privilege users. Nevertheless, we recommend all users update to the latest version. Loginizer: Unauthenticated SQL Injection Loginizer had an unauthenticated SQL injection in versions before 1.6.4, caused by a lack of input filtering before executing a database query. An attacker just had to craft a request with a malicious username. More information here.
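As a closing illustration of the bug class behind the Loginizer issue (a generic example, not the plugin’s actual code): the vulnerable pattern interpolates user input directly into a SQL string, and the fix is a parameterized query. A minimal sketch using Python’s sqlite3 to keep it self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

username = "x' OR '1'='1"  # attacker-controlled "username"

# VULNERABLE: user input is concatenated straight into the query,
# so the crafted username rewrites the query's logic.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{username}'"
).fetchall()
print(len(rows))  # 1 -- the injection matched every row

# SAFE: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (username,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named x' OR '1'='1
```

The lesson for plugin users is the same as always: update promptly, and remove anything closed or abandoned.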