Video Case Study #1: Obsev and DevriX

Obsev.com (OBSessed with EVerything) is a news site that hosts particularly viral entertainment and cultural commentary stories. In November of 2018 they were doing significant traffic (30MM+ pageviews per month) and hitting a wall with scaling challenges, so they came to Pagely for help. We paired them up with one of our trusted agency partners, DevriX, and began the process of sorting them out. With performance issues capping their growth and damaging their brand, it was imperative that we get them through that wall and remove the scaling barrier.

Fast forward to today and they’ve more than quadrupled the traffic levels that were hobbling the site before Pagely. In this first-of-its-kind video case study, we delve into each of the challenges that Pagely and DevriX were able to unravel and resolve for Obsev. You’ll hear from Obsev CEO Raymond Attipa, DevriX CEO Mario Peshev, and Pagely Director of Hosting Ops Arman Zakaryan. While the interview explores some complex technical topics, we’ve deconstructed each one and presented it in layman’s terms so that even the most technically challenged viewer can understand the unique challenges and resolutions.

If you face a serious scaling challenge of your own like this one, get in touch with us. We’d relish the opportunity to get you sorted out the same way we helped Obsev. Cheers!

Show Notes

Time - Topic
0:01:23 - Welcome and context
0:05:17 - Teaser
0:06:13 - Challenge #1: Git-based source control
0:08:26 - From Pagely’s side, what was involved in solving that?
0:09:36 - 5 different ways you can use Git on Pagely
0:11:23 - What was Obsev’s traffic in November when this all started?
0:12:34 - Challenge #2: Performance issue related to infinite scroll
0:14:53 - Arman deconstructs the UTM cache key aspect
0:17:03 - Arman talks through the CDN rewrite aspect
0:18:56 - What was the crux of the resolution for this?
0:22:13 - The challenge of bandaiding issues on a site you take over as a consultant
0:23:23 - What was the hosting configuration prior to Pagely?
0:25:31 - Obsev quantifies the business impact of the degraded performance from this issue
0:29:25 - Challenge #3: Image optimization of a massive library of images in S3
0:34:13 - How Pagely’s PressThumb dynamic image optimization service helped fix a coding issue with calling unoptimized images
0:35:43 - Challenge #4: Solving the massive volume of image storage using Pagely’s Press3 service
0:39:31 - Primary challenge: massive traffic spike from a viral post
0:42:08 - Arman: “That kind of load spike is a daily thing for us”
0:44:20 - This is precisely why Pagely gives every customer its own dedicated VPS
0:46:03 - The other fix involved porting UTM manipulation code from PHP to JS
0:49:18 - Recap of the primary challenge
0:50:56 - How to engage DevriX if you’re interested in working with Mario’s team
  • Obsev – the client site we discuss in this episode.
  • DevriX – the Pagely partner working with Obsev.
  • NGINX – the open source web server Pagely uses by default in place of Apache.
  • Thumbor – the open source thumbnail generation library at the heart of Pagely’s PressThumb service.
  • Amazon S3 – the cloud storage service that currently serves Obsev’s media.
  • WP-CLI – the command line interface for WordPress.

Transcript

Sean Tierney: 00:00 So we went from a 36-core machine, which is almost $6,000 a month, and with these tweaks that we’re talking about, we were able to get it down to a four-core machine, which is $1,000 a month.

Raymond Attipa: 00:11 The month of November was around 30 million pages. And like you said, there are times where out of nowhere we get 2 million pages in a day. You know, a lot of issues were happening. I don’t even know if we had the right size servers that were loaded up.

Mario Peshev: 00:28 So version control and the ability to run deployments easily and manage assets easily, those were things that were definitely needed.

Arman Zakaryan: 00:36 At that point you’re making full use of the CDN, you’re getting access to the hundred-plus edge locations Amazon CloudFront provides.

Mario Peshev: Now, as you mentioned, uh, we had the time and the opportunity to rebuild the platform into something that’s loading, I think, almost six times faster.

Arman Zakaryan: It performs really well. It performs better than if you were to try to do this image optimization in PHP.

Arman Zakaryan: 01:00 It shows you the benefits of caching really clearly.

Raymond Attipa: It’s running at light speed now. You know, that’s kind of like a guilty pleasure for me, to see how fast we can load a site, because the faster it loads, the better the audience reacts to it.

Sean Tierney: 01:24 Okay. All right, we’ll get started now. Thank you to everyone who’s tuning in. This is the first of its kind, so I’m very excited. This is our first video case study. We’re doing it with a client of ours, Obsev, and a partner of ours as well, DevriX. Uh, we also have our director of hosting ops on the line. So we’ll do a quick round of intros here in a second, but I just wanted to kind of tee this up and explain what we’re doing here. So this is our first attempt to really get behind the scenes and unveil some of the complex technical challenges that we solve for clients, uh, in conjunction with our partners. And so, you know, a lot of times when we’re talking to Pagely prospects, uh, we can say, you know, we’re unlike other hosts in that we can do very sophisticated things and really drill into coding issues and DevOps server issues and such, uh, but this is really, we want to walk the walk here and really actually expose some of that, and Obsev has graciously agreed to be the guinea pig here and, uh, kind of help talk through some of this.

Sean Tierney: 02:23 So thank you Raymond for joining us. Maybe you can introduce yourself first.

Raymond Attipa: 02:28 Thanks for having me. Uh, I’m Raymond Attipa, I’m the CEO of Obsev Studios. We run Obsev.com, along with a new site we are just about to launch, WhatsThat.com. We also have YouTube channels, and that’s the other side of the business.

Sean Tierney: 02:49 a lot of traffic so I can’t wait to dig into some of that.

Raymond Attipa: 02:52 Definitely.

Sean Tierney: 02:53 And Mario is a partner of ours. Mario, can you introduce yourself?

Mario Peshev: 02:58 CEO of DevriX.

Mario Peshev: 03:00 We provide WordPress expertise for publishers, startups, and some enterprises as well. We are super happy to be working with Ray on Obsev and WhatsThat, and looking forward to growing the traffic immensely. Probably at some point in time, maybe hitting a billion. What would you say, Ray?

Sean Tierney: 03:24 Okay, there you go. Awesome. And from our side, we also have our director of hosting ops, uh, Arman. Arman, do you want to introduce yourself?

Arman Zakaryan: 03:33 Hey, I’m Arman Zakaryan. I’m the director of hosting operations. At Pagely I head up the DevOps team and I’m pretty much involved in a lot of the day-to-day stuff. Uh, when we have new clients like Obsev coming on, we generally have to pay close attention to what their trouble points are with the current solution they’re moving from. And so my role is really getting everybody on the same page, uh, tapping the resources on the DevOps team to do the server tuning and enabling various features, doing kind of that heavy-lifting stuff, uh, as well as working with support on making sure that, that they know how the client needs to be treated. Um, any special things that we’ve done for them, we make sure it’s all documented on our side, so that the whole team is able to help them instead of having to go to, uh, you know, a DevOps person or to me every time. So it’s really just about getting, uh, getting everybody on the same page, getting everything that we’ve done customized for them in a manageable way, using Ansible, using our ARES gateway custom rules engine. Uh, so yeah, I’m really happy to be finally meeting you, Raymond. Nice to meet you as well.

Sean Tierney: 04:59 It’s all been over tickets, for everyone watching, so this is actually our first time face to face, which is interesting.

Arman Zakaryan: 05:05 Cool. Well

Sean Tierney: 05:06 So, a quick teaser here. Uh, before we drill in, I think what I’d like to do is step through the series of challenges and we can kind of drill in on each one. Um, but the teaser is, uh, or I guess the crux of the main challenge was: you guys have a news site, you’re doing a substantial amount of traffic, and it’s prone to viral spikes. The nature of your traffic is such that you get Snapchat and these other, uh, you know, social media channels that drive massive amounts of spiky traffic. And so, uh, that type of setup really needs to have a good caching system enabled. And so we found that, uh, due to a coding thing that we discovered, we were able to get you from a server that, you know, was one of our larger enterprise servers, I think it was like a six-grand-a-month spend during one of those massive spikes, but with some fancy footwork and some tweaking of the code, we were able to get that down to a $1,000-a-month server.

Sean Tierney: 06:03 Right? So that’s like, uh, a sixth of the cost through making that change. But, so that’s the teaser, we’ll hit that one last. There was a series of challenges leading up to that. So I figured, let’s start with, um, when we set you guys up, and just kind of walk through these chronologically. So you guys started with us, I believe, in November, a couple months back. And, uh, I guess, Mario, this would be a question for you. I know the first thing we did was to set up a, uh, a Git-based, you know, source control workflow for them. Uh, what was the need that prompted that? Why do you, why do you like to use Git, why source control, and why that DeployBot setup?

Mario Peshev: 06:41 Yes. The previous version of the website that was set up by the previous agency, uh, wasn’t quite ideal. It was based on [inaudible] and it wasn’t really intuitive when it comes to deployments. Working with different teams was challenging, and collaborating with multiple people and multiple branches was really something that wasn’t established. So in order to solve that, and in order to move things to the next level, essentially we moved entirely to Git. We separated all concerns: the database, the code base, everything kind of moving into different pieces. Uh, assets were already moving in a separate direction, and I believe Arman is going to, uh, share similar challenges with assets a little bit later. Uh, so doing that, we chose a path that was more resilient, more flexible, and easier to maintain over time. Uh, which is why, of course, we recommended you guys, since we are also hosting other publishers with you, some serving hundreds of millions of page views a month, and we know that you can scale and we know that you can deliver.

Mario Peshev: 07:43 And due to the complex nature of Ray’s business, right now having Obsev, and now we are working on a separate multisite with kind of different branches, we really needed an infrastructure that can host all of that and be able to create and spin up different staging environments, even for multisite and mapped domains together, whenever we need it, and, uh, even move some of the sites into the [inaudible] site in order to make all that work. So version control and the ability to run deployments easily and manage assets easily was definitely, definitely needed. And we simply knew that you can deliver.

Sean Tierney: 08:25 Awesome. And Arman, from our side, what was involved in that? Was that a pretty straightforward process, or was this a unique configuration that we’ve not seen before? Can you talk about, uh, you know, how this was in terms of, uh...

Arman Zakaryan: 08:39 Yeah, so we make it pretty easy. If you’re using something like DeployBot, then you need SSH access. So we have our Atomic control panel, which supports collaborators. You can just invite, uh, an email address to your account and you say what role they should have, and then you upload an SSH key. Once you upload your public key there, then you tie that in on the other end in DeployBot and off you go. Like, you don’t have to talk to support to set up your SSH keys, that’s all automated on our side. We generally try to do that with, with things that are very common. You know, our support team is always there, they’re always helpful. They can, they can catch things if something’s not working right or something’s not automated yet. But we always try to, to delegate that to be self-service so that you don’t have to wait for a response at all.

Arman Zakaryan: 09:32 It just happens. Um, on the topic of Git, there are actually maybe five or six different ways you can use Git on Pagely. We support the most basic way, obviously, which is just making a Git repository on your VPS, which lets you set up post-receive hooks; you can run build scripts from there, and you can deploy that, once it’s successful, to your website. We support webhooks with any generic system, but mainly GitHub and Bitbucket is what our customers use, or any system that can, uh, initiate a script to go pull your data from GitHub or from Bitbucket or wherever. Uh, what that basically means is, once you take an action on GitHub, you merge a branch or you tag your release, that is what invokes that whole process to begin. It’s not running git push from your, from your computer to the server on Pagely. Um, and then we also support, uh, deep web roots. So if you want to have a ‘current’ symlink pointing to what your release should be, you can use that as the way to build a release and have an easy way to roll back to a previous code base; we support that too. That’s just a few examples, uh, of the flexibility that we have on Pagely.
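
To make that webhook flow a bit more concrete, here is a minimal sketch of the pattern Arman describes, where an action on GitHub or Bitbucket triggers a pull on the server. It is purely illustrative, not Pagely’s implementation: the port, repository path, and endpoint are made up, and a real listener would also verify the webhook’s signature before doing anything.

```javascript
// Minimal webhook-triggered deploy sketch (Node.js). Paths and port are hypothetical.
const http = require('http');
const { execFile } = require('child_process');

const REPO_DIR = '/var/www/example-site'; // existing Git checkout on the server
const BRANCH = 'master';

http.createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/deploy') {
    res.writeHead(404);
    return res.end();
  }
  // A production setup would verify the webhook secret/signature here.
  execFile('git', ['-C', REPO_DIR, 'pull', 'origin', BRANCH], (err, stdout, stderr) => {
    res.writeHead(err ? 500 : 200, { 'Content-Type': 'text/plain' });
    res.end(err ? `deploy failed:\n${stderr}` : `deployed:\n${stdout}`);
  });
}).listen(8080);
```

Point a repository webhook at the /deploy endpoint and a merge or tag kicks off the pull, which is the “action on the repo starts the deploy” idea described above.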

Sean Tierney: 10:57 Awesome. Cool. All right, well, so that was, that was really like the first milestone, I feel like. Just to let everyone know, I went through and actually looked at the 30 tickets, uh, you know, the 30 all-time existing tickets, read every exchange, and really tried to extract what I think were kind of the key challenges and milestones here. So that seemed like the first one. Um, and that’s great. Let’s talk about the second thing. And actually, Raymond, are you willing to share what your traffic was in November when this all started?

Raymond Attipa: 11:31 Um, sure. I think it’s, it’s a little tricky to give you a very accurate number, because the way we were buying audience from Snapchat was completely throwing GA off. It was counting pages as sessions, so that kind of throws it all off.

Raymond Attipa: 11:55 But I can give you a rough estimate.

Sean Tierney: 11:57 Mmm.

Raymond Attipa: 12:04 The month of November, yeah, the month of November was around 30 million pages.

Sean Tierney: 12:14 I mean, that’s a substantial amount of traffic. Um, so, so doing a good amount of traffic, and you guys have a pretty neat site, the way it does the infinite scroll. And it’s a news site, for people listening; you can go to Obsev.com, that’s o-b-s-e-v dot com, and see what we’re talking about here. Um, it seemed like the next challenge, looking through the tickets, uh, in early December, there was an issue with the infinite scroll that I’m hoping we can talk a little bit about. It seems like, due to the way that that worked, uh, it was kind of consuming unnecessary server bandwidth. And so maybe Arman or Mario, if one of you guys wants to kind of briefly state what that issue was.

Mario Peshev: 12:55 Sure. Did you want to go? No, I was just going to discuss the business semantics for a moment and then you can explain kind of the technical implementation. Basically, from, from our standpoint, because we are delivering traffic from, uh, all sorts of different social networks and kind of different traffic resellers, so to speak, uh, we do deliver different layouts depending on whether we are serving traffic to, say, Facebook or Outbrain or Snapchat or whoever it is. And in order to accommodate that, and in order to make sure that we serve the proper amount of, uh, data in the most optimal layout, we have different templates in WordPress. And so, uh, we hide some mastheads, we have some stickies, we have some videos here and there. It’s, it’s fairly dynamic. Uh, and in order to do that, we are using UTM parameters, simply because through UTM parameters we can define whether someone’s coming from Facebook or Snapchat or somewhere else. And normally, in order to handle that, we do want to maximize the potential of caching, in order to save server resources and in order to reduce the, the server cost. And we had to work closely with Arman on actually finding the most optimal manner to serve different layouts whenever needed and still retain the vast majority of the cache, uh, so that we can bring the...

Mario Peshev: 14:17 ...the costs down from 6,000 to 1,000, which, uh, Sean already mentioned. So Arman, do you want to tell us more about the technical side of this one?

Sean Tierney: 14:25 Well, actually, let me just quickly clarify for the people listening, if they’re not familiar with what a UTM parameter is: this is something in the query string that’s different for every inbound channel that you have to the site. And so it’s a way of identifying where this traffic came from, and you guys are actually serving up a different layout to those channels based on where they came from, correct? Yeah. Yeah. Okay. So...

Arman Zakaryan: 14:52 If you just go and put together your own hosting stack, let’s say you, you pick up EasyEngine and you’ve got your WordPress environment up and running, uh, and you put some NGINX on it and now you’ve got caching, the URL that you view the site through is your cache key. And so while UTM parameters let you do stuff like, like what Mario described, being able to serve optimized pages depending on, on where the traffic is coming from, it’s really detrimental to caching. Um, by default it will consider every single one of those pages as unique, uh, when really you might just need a couple of things to be a differentiating factor, like the UTM source or the term or whatever. Uh, so on Pagely, by default we strip out those UTM parameters from the cache key, but if you need to whitelist any specific values, it doesn’t just have to be all or nothing: we can do specific UTM source, UTM term, or a combination of all of those.

Arman Zakaryan: 16:08 And so that’s, that’s baked into our gateway, so it’s really easy for us to adjust it. We don’t have to do any crazy stuff. Um, even if you’re, even if you’re on, um, you know, an entry-level plan, even if you’re on our new $199 VPS-1, you can do that kind of stuff. Keeping in mind, you know, obviously there are performance impacts to that, but you get that same feature set across the whole line. So, you know, for, for Raymond, uh, he doesn’t really have to be concerned, like, oh, if I’m in a slow season, this is not when I’m getting all my traffic, I want to scale down to a lower plan that can handle the lower traffic... he doesn’t have to worry about losing these features that, that Pagely set up for him. Those features are available across the whole product line for VPS.
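
To illustrate the cache-key idea Arman is describing: the gateway effectively drops UTM parameters from the key unless they’ve been whitelisted, so thousands of differently tagged URLs collapse into a single cached page. Here is a rough JavaScript sketch of that normalization, illustrative only; Pagely’s ARES gateway does this in its own rules engine, not in site code, and the whitelist shown is hypothetical.

```javascript
// Collapse utm_* query parameters out of the cache key, keeping only whitelisted ones.
const WHITELISTED = new Set(['utm_source']); // hypothetical whitelist

function cacheKeyFor(rawUrl) {
  const url = new URL(rawUrl);
  for (const name of [...url.searchParams.keys()]) {
    if (name.startsWith('utm_') && !WHITELISTED.has(name)) {
      url.searchParams.delete(name);
    }
  }
  url.searchParams.sort(); // parameter order shouldn't create extra cache entries
  return url.toString();
}

// Both of these now map to the same cache entry:
//   cacheKeyFor('https://www.obsev.com/post/?utm_source=snapchat&utm_content=a1')
//   cacheKeyFor('https://www.obsev.com/post/?utm_content=b2&utm_source=snapchat')
```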

Arman Zakaryan: 17:04 So, about the CDN stuff: the way our PressCDN feature works is with a WordPress plugin that will rewrite the URLs for your static assets to go through an Amazon CloudFront distribution that we, we configure for you, and the URL gets rewritten automatically for any asset on that initial page load. We just do that for you. Uh, you don’t really have to do anything, unless you’re doing some, uh, additional asset loading through something like an Ajax load-more infinite scroll. And I think that’s, uh, that’s what we want to cover in this, in this part of the talk: the challenge that that presented and how DevriX was able to figure it out, uh, and get that working properly. And really it’s just a matter of getting the right URL, the right domain name, to be used when you’re, when you’re loading more assets. And then at that point, you know, you’re making full use of the CDN, you’re getting access to the hundred-plus edge locations Amazon CloudFront provides.

Arman Zakaryan: 18:11 And, uh, it performs a lot better for the static assets to load from somewhere closer to where the visitor is coming from, rather than having to go all the way to the origin server. Which, you know, if they’re on the east coast and your server is in Virginia, it’s going to be fine; if they’re coming from the west coast it may be a little choppier; if they’re in Europe it could be even slower. Uh, so it’s really important to get the CDN to load as many assets as possible, uh, so that you can get the best performance.

Sean Tierney: 18:44 Nice. And so initially the CDN was not working properly just because of the way the infinite scroll worked, or at least that was my take on it. And so what was, I guess, what was the resolution there to making that work properly?

Arman Zakaryan: 18:59 It was just being aware of, uh, the way that the Pagely platform works, and really how, unless you’re doing full site acceleration where you have your entire website behind the CDN, you have to do this kind of stuff where you, you have a slightly different URL for your, for your images, which is like cdn.[inaudible].com. And, um, it’s really just being aware of that factor and, and making the necessary adjustments to the, to the site code, uh, changing some stuff in the theme so that it will put the CDN URL instead of the main site URL. So, to be clear, out of the box you, you get those rewrites; it’s only if you’re doing any special stuff, uh, that they had to actually make a few adjustments, because we do that at render time, when we’re actually rendering the HTML before we send it back to the, to the visitor. We do that on the initial load. Um, and, and we are looking into ways to, to make that just work, you know, regardless of whether you’re using infinite scroll or not. So we, we definitely understand that that’s an issue that, uh, some people run into, and, uh, you know, we’ll work with DevriX to figure out what their solution was and then see what we can bake right into, into our PressCDN product and make that less of a headache in the future.
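
For a sense of what the theme-side adjustment can look like, here is a hedged sketch of rewriting Ajax-loaded images to the CDN hostname inside an infinite-scroll handler. The hostnames are placeholders, not Obsev’s actual configuration, and DevriX’s real fix may have taken a different shape (for example, rewriting the URLs in the endpoint that renders the next batch of posts).

```javascript
// Point images in a freshly fetched fragment at the CDN before appending it to the page.
const ORIGIN_HOST = 'www.obsev.com';
const CDN_HOST = 'cdn.example.net'; // placeholder CloudFront alias

function rewriteToCdn(fragment) {
  fragment.querySelectorAll('img[src]').forEach((img) => {
    const url = new URL(img.getAttribute('src'), window.location.href);
    if (url.hostname === ORIGIN_HOST) {
      url.hostname = CDN_HOST;
      img.src = url.toString();
    }
  });
  return fragment;
}

// Usage inside a "load more" handler:
//   const html = await fetch(nextPageUrl).then((r) => r.text());
//   const doc = new DOMParser().parseFromString(html, 'text/html');
//   container.append(...rewriteToCdn(doc.body).children);
```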

Raymond Attipa: 20:24 I’d also love to add that, uh, a lot of the challenges we faced during this migration were not from DevriX’s code base at all. It was from a previous developer’s build, and we moved to Pagely to be able to, pretty much, work a little bit easier with DevriX. You know, that was, it was, it was a suggestion. However, you know, we, we saw everything that we were being offered from Pagely as far as optimizing, um, and using less hardware to, to get to where we needed to. Um, and you guys will see, as Mario and his team are building up the new site, which we’ll be transferring all the content over into, um, it’s running at lightning speed now. You know, because a lot of the stuff that you see... Obsev itself was comprised of over 12 years of content.

Raymond Attipa: 21:24 Um, you know, it was about a handful of sites that merged into one, and it was called Obsev, and we faced a lot of challenges, you know, with over 200,000 images that needed to be, you know, imported to the new, uh, was it PressCDN or... um, PressCDN, yeah, that was the one. You know, that was a challenge because things were starting to break, but a lot of the current site you see has a lot of bandaids on it. Um, and that’s why we, we faced these issues. But I’m pretty certain that when we complete the new build that we’ll be launching shortly, we won’t have any of these issues anymore.

Mario Peshev: 22:07 Yeah, I appreciate you, I appreciate you pointing this out. Basically, one of the most common challenges we do face whenever we’re kind of starting out on a high-traffic website like yours, uh, essentially is, as you said, lots of, uh, bandaids being applied here and there, here and there, for various reasons, different heavy core changes just interfering with the lifecycle of the application and kind of colliding with the rest of the, the workload. So, uh, essentially what we tried to do here is make things, let’s say, bearable for, you know, a few months, just trying to patch whatever we can, uh, make it through Q4, essentially make the best out of the highest-traffic part of the year in terms of, you know, Black Friday, Cyber Monday, Christmas, and everything else in between, and have all that in place. Now, as you mentioned, we had the time and the opportunity to rebuild the platform into something that’s loading, I think, almost six times faster overall. Uh, and as you pointed out, we’re moving all the content to the new platform, which is entirely designed to handle high-traffic publishing websites. I was going to ask you, so where were you hosted prior to Pagely?

Raymond Attipa: 23:24 We had an Amazon AWS account directly. It was a completely different mentality of doing this. You know, we had an origin URL, um, which had a CloudFront URL cached onto www.[inaudible].com, you know, so that barrier there was constantly facing issues of, of what’s being cached and being shown to the user. And things that were cached, such as JavaScript files, weren’t being cleared, you know, correctly, and it was causing a lot of issues on the front end. A lot of issues were happening. You know, I don’t even know if we had the right size servers that were loaded up. It was just not done in an optimal way to actually be able to scale it comfortably. And yes, you know, when we would have large scales of traffic, or, you know, a large uptick in traffic come in, you know, things would start breaking. And, you know, like you said, there are times where out of nowhere we get 2 million pages in a day. You know, and that’s because, you know, we get a campaign that just lights on fire, and you need to be prepared for those. Somehow you need to be prepared for that, and you need to be prepared for it programmatically more so than manually.

Sean Tierney: 24:47 Right. And so you mentioned that the site loads lightning fast, and obviously, from a visitor perspective, that just translates to a better experience. And, you know...

Raymond Attipa: 24:59 We ended up seeing an uptick in pages per session, which technically, you know, in our business model means we end up loading more, you know, ads to the user. You know, we don’t upset the user with lag time, and as fast as the site is now, the newer site will be much faster. We’ve already seen that with the infrastructure that was built for the WhatsThat site, and this is going into that same multisite platform.

Sean Tierney: 25:26 So can you maybe, or if you’re, if you’re willing to, can you quantify or give any kind of indication, like, business-wise, what that translates to in terms of, like, repeat business or, you know, time on site or any of those KPIs?

Raymond Attipa: 25:40 It’s super important to us. We’re trying to tell a story, you know, with a certain article piece that has 40 elements to it; um, you know, you don’t want that user to tune out for any particular reason. Again, content is number one, you know, hooking them in with the right content. When you hit that person with the right content, you know, then the site itself needs to be performing optimally. You know, when you send 100,000 people to one piece of content within an hour, you know, the server takes a beating. Especially, you know, we have content pieces that have 40 images in there; you’ve got 40 images, you layer on all the JavaScript that we’re loading with all the ads that are loading, so on and so forth, and it becomes a heavy strain on the server. So every millisecond, I mean, every, every millisecond better the site is performing, you have a better chance of that user going deeper into the article and, you know, finding the actual hooks that they’re looking for.

Raymond Attipa: 26:46 So the faster the site loads, we see an automatic return on, um, on, on pretty much the retention rate of the user. So Mario, Mario’s team built out a, uh, a quiz platform for us, and, you know, we ran a, a quiz, a nostalgia quiz we did. The quiz itself reacts extremely fast, as far as from the time you answer a question to the time you go to the next question, and we saw an 87% retention rate on that, which means 87% of the users actually got to the end of the quiz. Um, again, so, you know, it’s, it’s part content, part technology. You know, they both have to kind of work together. Um, you know, if we have poor technology, for example, and users are lagging, you know, four or five seconds between a question and an answer, you know, chances are that user is going to drop off; uh, maybe only about 40% of them complete it, rather than 100% or 87%.

Sean Tierney: 27:55 Right. You’ve got to have both of those elements there: you’ve got to have compelling content, but you can’t be handicapped by your technology, basically.

Raymond Attipa: 28:02 Exactly. And, and you know, the faster the site is loading, the more content, exponentially, that user will be consuming.

Sean Tierney: 28:14 That’s great. Well, so let’s talk or go ahead.

Mario Peshev: 28:18 Yeah, sorry, Sean. Just to add on top of what Ray said, uh, we also need to account for the fact that, uh, media users are, for the most part, oftentimes 80 to 90% of the users, on mobile, and they’re also using social media. And by social media, I mean that they’re actually using mobile applications like Facebook’s in-app browser, Snapchat’s in-app browser, which are extremely heavy applications. And whenever you have longer stories, because storytelling is extremely important and those stories are incredible and extremely engaging, it’s, it’s a pity whenever a mid-tier device with a heavy, say, Snapchat browser, on a mobile device with six or seven different types of messengers, simply crashes five, six, seven headlines into the article itself. So being able to provide faster response times and, you know, fewer delays, both on the client side and the server side, is simply paramount in order to get people through the entire journey.

Sean Tierney: 29:22 Cool. Also, I want to talk about a different facet of how speeding things up worked. So the CDN was one way that, that we were able to make that happen. Um, but let’s also talk about the PressThumb thing with the image optimization. So my understanding was there were some, uh, issues, you know, you had this massive number of images that were brought over into S3, uh, but they weren’t necessarily optimized for fast loading. And so Arman, maybe you can tackle this one and just talk a little bit about what PressThumb is and what we did there.

Arman Zakaryan: 29:55 Sure. So PressThumb is Pagely’s automatic image optimization and thumbnail generation feature. You can get it for free on any VPS plan. Um, the way it works is using an underlying application called Thumbor, which is written in Python, combined with, uh, some custom gateway rules and a plugin for WordPress. We rewrite the URLs to, to your static assets to be like .jpg.optimal.jpg, or .jpg.webp if you’re on Chrome, and when you request an image from that URL, it serves the most optimized, uh, image you can get without having to sacrifice quality. And really, you know, there are a lot of different plugins out there that do this. Uh, there’s Kraken, there’s Smush It, there are all these services, some of them cloud-backed, uh, and many of those require you to, to, to do all that image processing up front.

Arman Zakaryan: 31:10 The big difference with PressThumb is that we do that processing when you request that image, on the fly, and if you are requesting an image at a different dimension, we can do dynamic thumbnails as well. And so what that does is it saves you from having to spend so much disk space having to generate all of those different dimensions that you need up front. And it saves you from the server load up front of having to go and create those, uh, optimized versions of those images as well. Uh, so we, we just do that all on the fly as it’s requested, and once it’s hit, once it caches, it’s just like any other normal image that goes to the CDN, uh, for the global distribution, and, uh, overall it helps you save on your bandwidth costs and your storage costs. Obviously it’s, it’s a dynamic service that runs on servers, so that takes a little bit of resources. Uh, but, you know, we have some very large clients, uh, and also this was one of those, like, high-traffic sites that are using this, like, leveraging it a lot. Um, and it performs really well. It performs better than if you were to try to do this image optimization in PHP, and it keeps your PHP workers free to handle other stuff.

Arman Zakaryan: 32:58 The old setup was not using PressThumb; it was actually using PHP functions that we disable on Pagely for security reasons, uh, so it actually would not even work in our hardened environment. And so that was actually the first thing that we looked at when we were onboarding, uh, before even the migration could be finalized, before we could even switch over the site to be hosted here. Uh, we helped to get that kind of, get that thing out of the picture and get PressThumb in place. And, you know, the, the end result is, instead of doing this, uh, kind of, kind of crazy way of doing it with some plugin that’s not really supported anymore, or made by some guy you don’t do business with anymore, you know, it’s, it’s a very simple, straightforward plugin that we install that just rewrites the URLs, and the heavy lifting is being done by NGINX and ARES and, and by Thumbor. And so that, that overall helps, uh, with your, with your website’s performance, because your, your PHP workers are serving the website instead of trying to do all this work on images.

Sean Tierney: 34:11 And was there anything else? I think I read at some point that there was some code that was calling larger-than-necessary image sizes that were, like, getting auto-constrained down by the theme. Was that another thing? Maybe you can mention that.

Arman Zakaryan: 34:24 Yeah. So that’s, that’s one of the big things that, that PressThumb helps with, uh, because you don’t have to generate all the images that you need up front for the different sizes. Uh, you can just append the dimension that you want to the original image URL. So it’s like, you know, obsev.com/image.jpg, and you want a 150 by 150 version of that, you just call it image-150x150.jpg. You don’t, you don’t call image.jpg and then, uh, change the size of it in your HTML code to fit, to fit that little part of the site. You can just call it by the size you want, and the server will render that image at that size, so you can, you can fit it into that, that space without having to scale it down. And so, yeah, that helps a lot on the client side, because you don’t have to download a large image. And it also helps, uh, just for browser performance, because it’s not scaling down a large image to fit inside that window. It’s, uh, it’s just putting the image at the native size that, that it got from the server.
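
The pattern Arman describes, requesting a size by encoding the dimensions in the file name instead of scaling a full-size image in the browser, looks roughly like this. It’s a sketch based on his description; the exact URL format PressThumb expects may differ.

```javascript
// Build a "150x150" style URL from an original image URL (format assumed from the description above).
function thumbUrl(originalUrl, width, height) {
  return originalUrl.replace(/(\.[a-z]+)$/i, `-${width}x${height}$1`);
}

// thumbUrl('https://www.obsev.com/wp-content/uploads/photo.jpg', 150, 150)
//   -> 'https://www.obsev.com/wp-content/uploads/photo-150x150.jpg'
// The <img> tag then requests exactly the size it renders at, rather than
// downloading photo.jpg and scaling it down client-side.
```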

Sean Tierney: 35:38 Cool. Alright, so moving on, I’m looking at the next challenge here. Uh, so media was previously served from S3, but it required that WP plugin. Is this related, or is this actually, like, a cutting-down-on-PHP-workers thing? Is that a separate challenge, or is that part of what you were just talking about?

Arman Zakaryan: 35:58 Those are two different things. Um, so if you, if you just keep piling images onto your server, then you have to keep resizing the disk on your server as you grow, right? Because you don’t, you don’t want to allocate a terabyte of space off the bat, because you’re going to be paying for space you’re not using. Um, and then you have to monitor that and keep, keep resizing it. Uh, so this is where Press3, uh, adds value, because we have a system where it will automatically shuffle older images onto S3. And combined with custom gateway rules, we have it so that it can proxy that asset from S3, uh, with the same URL that you normally would have for your image if it’s on disk. And with S3, obviously, there’s really no limit to how big your S3 bucket can be.

Arman Zakaryan: 36:57 You can just keep putting stuff in there and you don’t have to worry about resizing the disk. It’s obviously a lot cheaper to put stuff on S3 than on, on EBS volumes. Uh, so it’s just a lot easier for a site that’s, that’s growing fast, adding a lot of new content, a lot of large images. Um, you know, they don’t really have to worry about having to purchase more disk space all the time. Uh, and actually with Press3, we, uh, we, we just have Ray have an S3 bucket on his own Amazon account. We don’t, we don’t mark up that price at all. Um, it’s just bring your own bucket, and you pay Amazon at cost for that storage, versus, you know, what we charge, which is 10 gig, 10 gigabytes for $15 a month, because it’s, it’s on EBS and we add redundancy and we back it up and all this stuff.

Arman Zakaryan: 37:54 Uh, so the, the Press3 and the PressThumb together, it was a really big win, because you get your images optimized automatically, you get dynamic thumbnails so you don’t have to have as much up-front space being used to, to make all those thumbnails, and then, as those images age, they get shuffled over to S3, and that keeps your server costs low. Um, and just the, the Pagely magic part of it is, regardless of where that file lives, it’s all accessible over the same URL. Uh, so you don’t have to do any fancy stuff, uh, to decide, like, oh, well, this image is on S3, so let me render this URL, and this one’s local, let me render that URL, or have to, uh, push the images off to S3 right away to, you know, to work around that problem. So that’s...

Sean Tierney: 38:47 it’s all transparent to the client to write your re your workflow didn’t change. This stuff was basically all transparent to you.

Raymond Attipa: 38:55 Yup.

Arman Zakaryan: 38:57 Yeah. So Ray just uploads his assets, his media, into the library. Um, there’s nothing different as far as his workflow is concerned. Uh, it, it’s, it just works the same way, same file structure, same everything. And, and we have, we have an out-of-band system that will, uh, handle the uploads and the proxying, so WordPress and the PHP workers don’t, don’t get bogged down trying to do any of that work.
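
As a rough illustration of that “same URL, different storage” idea: serve the file from local disk if it is still there, otherwise proxy it from S3. Pagely does this with gateway rules in NGINX/ARES rather than application code, so this Node sketch is conceptual only, and the paths and bucket name are made up.

```javascript
// Conceptual local-disk-or-S3 proxy for media URLs. Paths/bucket are hypothetical.
const http = require('http');
const https = require('https');
const fs = require('fs');
const path = require('path');

const UPLOADS_DIR = '/var/www/example-site/wp-content/uploads';
const S3_BASE = 'https://example-bucket.s3.amazonaws.com/wp-content/uploads';

http.createServer((req, res) => {
  const rel = req.url.replace(/^\/wp-content\/uploads\//, '');
  const localPath = path.join(UPLOADS_DIR, rel);
  if (fs.existsSync(localPath)) {
    fs.createReadStream(localPath).pipe(res); // recent upload, still on disk
  } else {
    https.get(`${S3_BASE}/${rel}`, (s3res) => {
      res.writeHead(s3res.statusCode, s3res.headers); // older upload, shuffled off to S3
      s3res.pipe(res);
    });
  }
}).listen(8080);
```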

Sean Tierney: 39:26 Cool.

Sean Tierney: 39:27 All right. Let’s move on to kind of the last one, and this is the biggest win, so this is the one I’m excited to talk about. On December 18th, Ray, what happened? You guys had some kind of post that went extremely viral that day, and you were just getting crushed with traffic, from my understanding.

Raymond Attipa: 39:44 Yeah. Yeah. That one was a,

Raymond Attipa: 39:48 That was a nice day of traffic. We had 4 million views that just came in a matter of, I don’t know, five, six hours or so. And the issue there, if I’m not mistaken, was the UTM parameters not being whitelisted, and that was causing multiple variations of the page to try to get cached, um, which was hitting the servers extremely hard. So I think temporarily, again, you guys jump in if I’m wrong, I think we temporarily had to, you know, drive up the, the size of the server. And then once the UTM parameters that we needed whitelisted were whitelisted, we saw, we saw the performance of the site to where the CPU usage dropped tremendously, once they were whitelisted, instead of constantly trying to, you know, cache new pages. It was the same page trying to get cached a million different ways because of all the IDs that were being passed, and once the whitelisting happened, we saw it decline. We were able to spin the server back down within a day or so, back to, back to normal levels again.

Sean Tierney: 41:06 So this is custom cache keys, I believe? Or Arman, is that what we did in that scenario?

Arman Zakaryan: 41:13 Yeah, we, we actually made a few extra adjustments to the UTM, uh, whitelisting, just to optimize and make, make sure it was only varying on the parameters that had to be different, just, you know, so, so you could have the right page render, or so that you could get the right sort of, uh, accounting for your affiliate traffic, so you can know, uh, where, where your traffic came from but also display the right stuff. Uh, so this is kind of going back to, like, if you’re doing it on your own and you have an out-of-the-box kind of setup you put together, uh, and then you get a bunch of this traffic that, that has all these different URLs, all these different query strings on the end of the URL, that same exact thing...

Arman Zakaryan: 41:58 ...it’s going to happen, unless you have, unless you have something to optimize that at the, at the gateway level. And, you know, that, that load spike that, that Ray’s talking about, you know, that kind of thing happens to, to our customers every day. It’s like a daily thing for us. It’s just normal, you know. So we have, we have automation to take your EC2 server, uh, from four cores up to, you know, I think like 96 cores is the most we can do on a single box. We’re, we’re really just limited by what Amazon offers, you know; we’re, we’re basically like your Amazon pros. So, um, yeah, it shows you the benefits of caching really clearly, because you can, you can go down from a 36-core box, if you have good caching, down back to four cores or eight cores, because you don’t have all that, all that dynamic workload going to PHP.

Arman Zakaryan: 42:58 It’s just NGINX caching, uh, everything. And, uh, you know, the really cool thing: even if you have fast-performing code, even if your dynamic PHP is quick and efficient, it’s always going to vary a little bit in how well it can perform. Like, sometimes it might be 300 milliseconds, sometimes it might be 600, you know, just because it is a dynamic call. Uh, but when you have, when you have some caching like NGINX or Varnish in front of it, it’s always consistent. You’re always going to get the same performance, and it’s always going to be orders of magnitude higher than what you can get with the dynamic, uh, requests. Even on a VPS-1, if your content is really caching well, if you’re only hitting cached content, you can do 300 requests a second. I mean, that’s, that’s pretty high, you know. Um, that’s more than most people see.

Arman Zakaryan: 43:56 Even, even if you’re driving a lot of traffic to your site, it’s really hard to hit 300 requests per second. Um, so yeah, the big thing is, like, these types of things are not special edge cases for us. Like, we’ve, we’ve built our entire system to accommodate customization. Every single VPS customer at Pagely has their own instance on EC2 with their own dedicated resources for CPU and memory. So, you know, just to avoid, like, adding insult to injury, the conversation becomes: congratulations on your success, you have a lot of traffic, you guys are gaining popularity, and let’s help you, let’s help you solve a couple of load issues and make sure your site’s running faster. Rather than, if you’re on shared hosting, uh, hey, uh, you’re the bad neighbor and we have to shut you down. You know, it’s a totally different conversation because of the way our product is positioned.

Sean Tierney: 44:57 Well, and it’s precisely at the time where, you know, you’ve done the hard work to generate that viral traffic spike, and so it’s a shame if at the peak of that, when you’re getting the most eyeballs, is when you go down and you’re losing out on the most traffic. So it’s got to be particularly painful, Ray, I would imagine, to be in that position of, like, having the unicorn viral spike, you know, just go crazy and then all of a sudden you’re down.

Raymond Attipa: 45:24 And that’s a two-fold issue, because one, you’re, you’re buying audience that’s landing nowhere, so it’s an automatic loss right there. And two is that the social, social media, the social networks and the native networks, they all have their bots that are automatically checking the ads, you know, pretty much in real time, and once they can’t hit the site, the ad is gone. And it’s, uh, you know, sometimes two weeks to get an ad to, you know, to get that successful, and two weeks of work just goes down the toilet within seconds.

Sean Tierney: 45:57 Yeah, so double damage there. Yep. And was that the only thing it took to resolve that, Arman or Mario? Maybe one of you, because I could swear I read in one of the tickets that we advised on, uh, whatever code had been written in PHP for manipulating the UTM stuff. I think I read somewhere that we advised moving that to JavaScript, and that maybe that played some part.

Arman Zakaryan: 46:18 Yes, that’s a big thing. Um, obviously you can, you can try to write those UTM parameters to be parsed by PHP. Um, but if you, if you need that to be handled by PHP on every request, caching is going to get in the way of that. Uh, so you can, you can make a couple of tweaks where that’s read on the, the JavaScript side, on the client side, instead, and it makes a big difference. Um, obviously, you know, we can let it all bypass cache and go to PHP, but then, then you’re back to needing 36 or 96 or however many cores to handle it. Uh, so yeah, I mean, that, that was one of the things that one of our senior DevOps engineers helped with, uh, advising on how to, how to do this using JavaScript instead of relying on PHP. And, um, I think he even provided some working examples of how to do that, uh, so that, so that the developer could, could actually go and make that, make that change, uh, you know, to the real code base.
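
A minimal sketch of the client-side approach Arman is describing, not Obsev’s actual code; the parameter handling and hooks are illustrative. The server keeps serving one cached copy of the page to everyone, and JavaScript in the browser reads the UTM values and adjusts presentation or hands them to the ad/analytics code.

```javascript
// Runs in the browser after page load; the cached HTML stays identical for every visitor.
const params = new URLSearchParams(window.location.search);
const source = params.get('utm_source') || 'direct';

// Toggle channel-specific presentation purely on the client.
document.documentElement.classList.add(`traffic-${source}`);

// Hand the attribution to analytics/ad code rather than rendering different
// HTML per channel in PHP, which would defeat page caching.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ event: 'traffic_source', source });
```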

Sean Tierney: 47:27 And Mario, I think you guys were the ones that executed that fix in terms of switching it to JavaScript. Is there anything that you want to add on that?

Mario Peshev: 47:35 Yeah, so we, we had initially started with the [inaudible] solution from day one, before we moved to Pagely, and we faced tons of different race conditions being implemented on the JavaScript end. And since the entire advertising model revolves around chatting with Google’s DFP and rendering that, invoking header bidding and a bunch of other things that happen behind the scenes, every single race condition, every single script that may be late or not loaded on time or anything like that, may completely crash the entire user experience. And due to the lack of reliability in the previous platform, we simply couldn’t afford to make any guesses. Because otherwise, with a successful campaign, like, let’s say the one in December with millions of page views over the course of several hours, uh, we may as well have otherwise lost, let’s say, 30% of the traffic due to slower load times.

Mario Peshev: 48:35 And, you know, scripts loading hectically and in various orders, ads not showing up, and potentially maybe even getting some violations, like the ones Ray said: like, your ads may get banned, Google may, you know, start knocking on your door and say, hey, you’re kind of claiming that you’re displaying ads, but those ads aren’t showing to users, what are you doing? You know, it gets to be a very complicated case. And, you know, being able to reliably implement the same functionality with JavaScript on your infrastructure, uh, that functionality is something that helped us immensely to reduce the workload as well.

Sean Tierney: 49:12 Awesome. Cool. Well, so just to recap, in terms of quantifying what that represents for the folks listening: it was a 36-core machine, Arman, I believe that’s what it was running up at. So we went from a 36-core machine, which is almost $6,000 a month, and with these tweaks that we’re talking about, we were able to get it down to a four-core machine, which is $1,000 a month. Um, so that’s a significant savings. It also means it’s more reliable, because it is being served from cache, uh, so it’s just dramatically more immune to this type of unreliability. So, um, great job. I mean, that’s an incredible win. I think we don’t need to take it much further than that. This is a, this is a good, good lesson here. Um, Ray, is there anything else that you’d like to add, any parting thoughts, in terms of your experience with both DevriX and Pagely?

Raymond Attipa: 50:05 No, I mean, other than it has been great so far. You know, we spent over a month or so on the whole, uh, you know, bringing on this site that was so complex, and trying to make it as simple as possible to, to pretty much onboard it onto Pagely, and we successfully got it here. I’m really looking forward to the launch of the multisite with DevriX and to seeing how fast we can load everything. You know, um, that’s kind of like a guilty pleasure for me, to see how fast we can load a site, because the faster it loads, the better the audience reacts to it. So looking forward to the new build, and it should be out shortly.

Sean Tierney: 50:49 Awesome. Hey, Mario, I want to give you just a quick chance for a commercial, because you are one of our trusted, valuable partners. For the people listening, if they wanted to work with you as a partner, how do they go about contacting you?

Mario Peshev: 51:02 Thanks, Sean, it’s appreciated. Everyone who would be willing to share the pleasure of being hosted with Pagely can find us at devrix.com. We do help, like I said, enterprises and publishers serving tens and hundreds of millions of page views. Uh, I believe all of our success stories are hosted with you guys; we know that you can do it and we know that you can, you know, prove it. And essentially, yeah, devrix.com, that’s d-e-v-r-i-x dot com, and happy chatting.

Sean Tierney: 51:35 Awesome. And hey, Mario is also listed on our partners page, it’s just pagely.com/partners. You’ll find them there as well. Yup. Cool. All right, well, we’ll wrap it up there. Ray, congrats on your continued growth. Uh, we definitely hope to be there when you hit the billion page views.

Raymond Attipa: 51:50 [inaudible]. Cool. All right, everybody, thanks for your time. Thanks, guys. Thank you.
