The following is a transcript of a talk given at various events throughout 2016, including Smashing Conf NYC, and An Event Apart Chicago.

An empty, black slide.

I’d like to begin with an exercise in relaxation. As many of you know, I am the picture of mellowness—ol’ Namasté Marquis, they call me. So, close your laptops. Close ’em. No phones either. No… Google Glass or whatever.

An empty slide, overlaid by OSX app icons—Mail, Slack, Calendar, and Messages, each with several notifications—as though appearing from the presenter’s dock.

Now, put your hands on the tables in front of you. Deep breaths. We’re no longer at work. Nothing can—… nothing can bother us now.

Haah. We’re… relaxing; we’re un— unwinding… and…

Okay no this is hell; this is my hell.

Screenshot of the speaker’s Google Calendar, with scant few times not marked as “busy”

I don’t actually remember how to relax. 90% of you are freaking out at those little red icons and the remaining 10% are too hungover to care—a stand-up broke out in the back row, and I don’t even think those people knew each other.

I get it; I do. Events like this are the only things that get me out of meetings for a little while. Somewhere in between all those, I get to do a little bit of typing.

The Bocoup logo.

I work at Bocoup. Back when they hired me, Bocoup was mostly a JavaScript-engineer-y kind of shop; Angulars and Embers and Reacts and what-have-you. They didn’t specialize in responsive web design at the time—though it was a part of almost every project they worked on. Performance was always a concern, but it was more of a nice-to-have.

Screenshot of the Test262 GitHub repository landing page.

Now, I’m not putting Bocoup down. Bocoup was—and is—quite possibly the best in the JavaScript business. Bocoup didn’t write the book on JavaScript—Bocoup wrote the tests for the entire JavaScript language, in JavaScript.

A timeline from 2011 to 2017, with the approximate current day marked “you are here” and the Bocoup logo marking mid-2014.

I’ve been at Bocoup for a while now.

A timeline from 2008 to 2014, with the Filament Group logo marking mid-2011.

Before that, I worked at Filament Group, and you might’ve heard of them too—they’re all performance, all the time. They build incredibly fast websites; they’re hired to build fast, accessible responsive websites; that’s their whole thing. That was MY whole thing.

So I roll into Bocoup all full of myself, like “okay, so, exciting news: we make fast websites now. I am very good at this! I trust there will be no objections.” There weren’t any—hell, why would there be? But it didn’t actually happen. That’s my fault, and to a lesser extent—real talk—I kinda blame you too?

Photograph of a tech conference audience, taken from the speakers’ vantage point.

Credit: Eric Meyer

See, I’m spoiled. I’ve got it pretty easy up here, I mean, apart from the incredible terror. We’re all relaxed, in so much as any of us can relax anymore. We’re all ready to be inspired. We’re gonna leave here full of new ideas; we’re gonna leave this conference full of fire. Point is, I stand up here and pound my fist on the lectern and shout “we’re gonna make faster websites from now on,” I mean, fingers crossed: it’s gonna be like Braveheart up in here. Cheers and thunderous applause and people throwing flowers and whatnot, and we are gonna mean every single bit of it, from the bottoms of our hearts.

In a couple days, though, we’re going back to The Grind, and it’s gonna do its damnedest to chip away at us. The deadlines will be back. The pressure will be back. Nobody here is gonna disagree that a faster website is a better website; nobody at Bocoup disagreed about that either. But our priorities will change, because let’s face it: those priorities aren’t always ours to decide. When we go back to the real world, it won’t be all about us, no matter how fired up we are.

That’s how things played out at Bocoup, too. I had changes to make, but everyone already had enough to worry about.

Screenshot of Google’s “new calendar event” creation page, with the meeting cheerfully named “perf matters” and a description reading “who wants to talk about performance! I’m bringing fudge!”

But I—no—I had a PLAN. I was gonna send EMAILS. We were gonna have MEETINGS. It was time for Bocoup to get onboard the Wilto Train: we do PERFORMANCE now; this was HAPPENING.

Screenshot of a Jira ticket creation page, with the issue tersely labelled “Do Performance” and the description “if there’s time left over.”

I filed issues on other peoples’ projects! They were, I was certain, highly welcomed.

Screenshot of a GitHub issue page, with the name “do performance” in all caps, and a comment reading “I’m not gonna ask again.”

I filed more issues, because—weird—those other ones got lost, I guess?

Screenshot of Google’s “new calendar event” creation page, with the meeting named “performance” and the description “attendance is mandatory,” both in all caps.

I scheduled more meetings. Angry, lonesome meetings.

Cuchulain stirred,
Stared on the horses of the sea, and heard
The cars of battle and his own name cried;
And fought with the invulnerable tide.
—W. B. Yeats, Cuchulain’s Fight With the Sea

This went on for a couple months.

I’ll be honest: I didn’t make a lot of progress. Looking back on it now, I know why it never went anywhere: I was just adding to the noise. I was another email, another issue, another meeting invite on the inbox tire fire. “Make the website fast” wasn’t helpful—nobody knew where to start. “Come talk to me about how to make the website fast” wasn’t any better; everyone had enough work to do already without going looking for more.

A small illustrated icon of crossed wrenches, centered in an otherwise empty slide.

It wasn’t a matter of the code itself. Dealing with a thousand browsers across a hundred devices—that part is, in a relative sense, pretty easy. We’re all working on those solutions together; we’re sharing what we’ve learned works or doesn’t work. That’s why we’re here.

A small, illustrated icon of a red heart, centered in an otherwise empty slide.

Getting people to care about performance—y’know, that’s not hard either, honestly. You find your own reasons, if you haven’t already. Whether it’s about the pride you take in your craft, about empathizing with the users you’ve never met and likely will never meet, or about going home at the end of the day feeling like you’ve helped push the web in the right direction just a little bit—whatever your reasons, they’re not wrong.

We’re gonna talk about the reasons I care, and I hope that some of them resonate with you too. But getting your team to care—that’s not the hard part either.

Performance Under Pressure title slide

So, here’s the plan. If I’m doing my job right, you’re gonna walk out of here with a bunch of new tricks in your bags, and you’re going to leave here full of determination. Those are the easy parts.

The hard part is finding the tiny points of leverage to push back against the grind, against the emails, against the barrage of issues. The hard part is putting it all to use in small, unobtrusive ways, and finding ways to involve your teammates so performance is something everyone owns.

So, here’s how I did it, and maybe it’ll work for you. When we get back to work, we’re not gonna file any issues. You’re not going to walk into the office on Monday and call a meeting to talk about how you want to change things. We’re not gonna pick any fights, because we’re not gonna give somebody a chance to stop us.

Screenshot of the current homepage

I did it by signing up for everyone’s nightmare project. I volunteered to redo our own website.

And I started this process the way any professional adult would.

A tweet reading “I’m coming for you, @filamentgroup,” with an attached screenshot showing middling results for

By trash-talking my previous employer on Twitter.

It’s totally okay to brag about how fast your site loads.
—Scott Jehl, Smashing Conf NYC, 2014

Now, in my defense, I did this because talking trash was a long-standing tradition at Filament. Mainly, though, I wanted to get everyone fired up about performance; to show that I was willing to stake my name on it, and to show that this wasn’t just some ticket floating around in Jira somewhere. This is something we could have fun with.

It’s just a job. Grass grows, birds fly, waves pound the sand. I beat people up.
—Muhammad Ali, Greatest of All Time

And I am intensely competitive, yeah. Honestly, though? Making websites is just my job. I’m not passionate about typing semicolons eight hours a day, as scandalous as that might be to say.

A photograph of the Bocoup Open Device Lab: a hand-made hardwood display case, with four shelves full of mobile devices and tablets.

What I do care about—what I care deeply about—is building something I can be proud of. The work itself; that’s just a means to an end.

Screenshot of the linked post, with the following lines highlighted: “Traffic jumped to 11 million uniques in July, the first full month of the relaunch, from 6 million in June, per the site” and “…the interaction rate on ads rose 108 percent.”

GQ cut its load time by 80%… traffic and revenue jump:
Ilya Grigorik

There are no shortage of business cases to be made for more performant websites—and no matter how strongly we feel about building a better web for its own sake, we’re gonna have to be ready to make those cases.

If it’s inaccessible to the poor it’s neither radical nor revolutionary.
—Source unknown

For me, it’s about building something for real people. I don’t get excited about frameworks or languages—I get excited about potential; about playing my part in building a more inclusive web.

I care about making something that works well for someone that has only ever known the web by way of a five-year-old Android device, because that’s what they have—someone who might feel like they’re being left behind by the web a little more every day. I want to build something better for them.

13% of Americans own a smartphone but don’t have home broadband—up from 8% in 2013.
PewInternet Research

That’s a lot of lofty talk around making websites, I know. We just type for a living; I used to say that all the time. “I just make websites.”

But nothing is ever neutral—not technology, not the tiny, least consequential-seeming development decisions we make during the course of an average, boring workday.

A full 21% of adults with under $20,000 in yearly income have access to a smartphone, but no broadband connection in their home. Likewise for eighteen percent of adults with a high school degree or less—I’m in that group, by the way. A lucky fluke.

Among Americans who have looked for work in the last two years, 79% utilized online resources in their most recent job search and 34% say these online resources were the most important tool available to them.
PewInternet Research

Now, very few of us are likely to have built a job-seeking website. But maybe we did build the site a user visited the day before they lost their job; the one that drained their pre-paid data plan. Maybe it was something we could justify: it was a site about art, so we could get away with huge filesizes for images. It was a shopping site, so we figured nobody would be using it on their phone. It was a site for games, a luxury—but maybe that user gave their phone to their kid because they needed a little peace and quiet on the day they lost their job—a little time to think about what to do next.

51% of smartphone-dependent Americans frequently (or at least occasionally) reach their max monthly data allowance.
PewInternet Research

The tiny decisions we designers and developers make day-to-day—to use a technology, to use a framework, to support or not support a browser, to use a PNG instead of a JPEG—those give us a kind of power we might not even be aware of.

We have power over access, each and every one of us. A kind of decision-making power that the people using the web don’t have.

TIL @opera is bigger, by user count, than @twitter #wowzer #DOM15
James Finlayson

This May, Opera Mini users generated around forty-six hundred terabytes of data, which, measured against the entire internet, probably doesn’t make it sound like Opera Mini sees much use. But data in Opera Mini is heavily compressed, up to 90%. If that data were uncompressed, those same users would have transferred almost thirty thousand terabytes.

We laugh off Opera as something “nobody uses,” but it isn’t a matter of the user downloading and clicking the wrong icon. It’s a matter of reclaiming some of that power from us—some agency—whether they realize it or not. It’s about giving users a voice in how they experience the web. It’s about choice.

But then we don’t include it in our list of “supported browsers.” “Nobody uses Opera.” “Just get Chrome; it’s better.” Our thumbs are on the scales. The user can’t win.

But that’s their time, not ours. Their data plans—that’s their money, not ours.

After all, users don’t expect us to be perfect. They just need us to understand that they’re not either, and to help them get things done anyway.
—Sara Wachter-Boettcher and Eric Meyer, Design for Real Life

We put a lotta focus on “delight,” as an industry. But even me, for all my browsing privilege—when I want to book a flight or check the balance of my bank account, “delight” isn’t real high on my list of wants—“booking a flight” and “checking the balance of my bank account” are. I want to accomplish my task quickly and get on with my day.

Cognitive load associated with stressful situations:
Waiting in line at a retail store: ~.575
Watching a melodramatic TV show: ~.6
Standing at the edge of a virtual cliff: ~.75
Watching a horror movie: ~.8
Experiencing mobile delays: ~.825
Solving a math problem: ~.825
Ericsson Mobility Report, Feb. 2016

And regardless of their browsing context, their circumstances, what it is they came to the website to do—I just don’t want to make anybody miserable.

A study performed by Ericsson early this year found that delays in loading a mobile website caused, on average, a 38 percent increase in heart rate, and an increased stress level roughly on par with watching a horror movie or answering math problems. I don’t want to put anyone through that.

A small, illustrated icon of a red heart, centered in an otherwise empty slide (shown previously)

I hope some of that resonated with some of you, but it’s okay if it didn’t—there are a lot of reasons to care, and none of them are wrong.

I fell into this job, the same way I’m sure a lot of you did. And back in my early twenties, what kept me motivated was—well, it was me. I wanted something better for myself. I wanted to prove that I could be something I wasn’t “supposed” to be; I wanted to be the first one in my family to not just have a job with a desk, but to carve our name into it for all to see.

That didn’t hold up, though; not for very long. That kinda anger can only carry you so far. I ran out. This became “just a job.”

It still is just a job; making websites is just a job, but that’s what doing that job means to me.

A small illustrated icon of crossed wrenches, centered in an otherwise empty slide (shown previously)

This next part: this is the easiest part. We’re gonna walk through some of the techniques I use to build performant websites. There’s gonna be code; hell, fair warning, there’s gonna be PHP in there.

But listen: I don’t want you to sweat the code too much, because y’know, I don’t know how to do all the things I’m about to talk about—and that’s okay. I don’t want to be the big damn hero that knows everything about performance, and I don’t want you to be either. What I want is to involve my team, to apply their strengths to performance work, and to bring some of these concerns into their comfort zones instead of trying to force them into mine.

The Critical Path title slide

The lion’s share of performance work revolves around what’s called “the critical rendering path.”

Icons representing an empty browser window and a browser window containing a rendered page, connected by a short horizontal line. Beneath it, identical icons connected by a long horizontal line.

The critical rendering path refers to the time and number of steps the browser has to take between making the initial request for a website, and being able to render that website in the browser. Even though the files themselves are usually tiny, requests for external scripts and stylesheets can have a huge impact on the critical path—on the time it takes for the website to actually appear.

  <!-- In the head of the document; these block rendering: -->
  <link href="blocking.css" rel="stylesheet">
  <script src="blocking.js"></script>

  <!-- After the markup; this one doesn’t: -->
  <script src="non-blocking.js"></script>

When we talk about performance, you’ll hear a lot about “blocking requests”—assets that lengthen the critical path—meaning that the page won’t even start to render until those assets have been requested and fully transferred.

Any stylesheets we include in our markup will prevent the page from rendering until those assets have been fully transferred and parsed.

The same goes for any JavaScript files we include in the head of the document. JavaScript files that come after our markup, though, won’t block rendering.

  <link href="all.css" rel="stylesheet">
  <link href="medium.css" media="(min-width: 35em)" rel="stylesheet">
  <link href="large.css" media="(min-width: 55em)" rel="stylesheet">

The link element, for stylesheets, uses a media attribute that you might recognize from responsive images. In a perfect world, we’d be able to use it to serve a user only the stylesheets that apply to their browsing context.

A line chart, first showing a request for index.html, and requests for medium.css and large.css that start as the request for index.html completes. large.css concludes later than medium.css, after which an icon representing a fully rendered page is shown.

But even if a stylesheet doesn’t apply—and could never apply—the browser prevents the page from rendering until that asset has been fully downloaded.

There’s a good reason for that, too: media queries are designed to respond to changes in context; window size, resolution, and so on. If we didn’t load a stylesheet until a media query applied, we could end up with a flash of unstyled content whenever the user’s context changed—like resizing their browser window. Worse, if their connection dropped out while browsing, they could get stuck with no styles at all.

An image of an Android device with a 600px wide display.

  <link href="all.css" rel="stylesheet">
  <link href="medium.css" media="(min-width: 35em)" rel="stylesheet">
  <link href="large.css" media="(min-width: 55em)" rel="stylesheet">

The good news is that some modern browsers will raise or lower the priority of that stylesheet based on that attribute. Those deprioritized requests won’t prevent the page from rendering.

A line chart, first showing a request for index.html, then requests for medium.css and large.css. The request for medium.css starts as the request for index.html completes. An icon representing a fully rendered page is shown when medium.css concludes, at which point the request for large.css begins.

In these browsers, the stylesheets that are necessary to render the page right away will still block rendering, but the others are loaded after the fact.

Screenshot of the LoadCSS GitHub repository landing page.


Unfortunately, that deprioritization only happens in brand new browsers—in order to load stylesheets asynchronously for everyone, we’d need to use a little JavaScript.
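One common way that JavaScript shim works is to insert the stylesheet with a media type that doesn’t currently apply, then switch it over once the file arrives. This is a simplified sketch of the idea, not the actual loadCSS library:

```javascript
// Minimal async-CSS loader sketch. The link is inserted with
// media="print" so it downloads without blocking render, then
// flipped to "all" once it has finished loading.
function loadCSS( href ) {
  var link = document.createElement( "link" );
  link.rel = "stylesheet";
  link.href = href;
  link.media = "print"; // doesn’t match: fetched without blocking render
  link.onload = function() {
    link.media = "all"; // now the styles apply
  };
  document.head.appendChild( link );
  return link;
}
```

The real loadCSS library handles edge cases far more carefully—older browsers without `onload` support on link elements, for one—so treat this as an illustration of the technique only.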

A line chart, first showing a request for index.html, then a number of requests for various external scripts and stylesheets. Those requests all start as the request for index.html completes. An icon representing a fully rendered page is shown when index.html concludes.

Of course, the shortest possible critical path would mean asyncing all of our assets, including our primary—or only—stylesheet. But a “rendered” website with no styles isn’t much of an improvement over an empty window, and having the styles snap in afterwards would be pretty jarring. Fast, but janky.

The CriticalCSS approach allows us to shorten the critical path to its logical extreme using a little smoke and mirrors: a website that appears fully rendered for the user, in the time it takes to make the initial request for the HTML.

~14KB sent on initial TCP connection
Mobile Analysis in PageSpeed Insights

The first step in the critical path is the initial round-trip from the server to the browser, and that new TCP/IP connection can include up to 10 TCP packets—about 14KB of data. That’s more than enough to get our markup into the browser so it can determine what other requests need to be made.
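That ~14KB figure isn’t arbitrary; it falls out of TCP’s slow-start defaults. Assuming a typical initial congestion window of ten segments and roughly 1460 bytes of payload per segment—both common defaults, not guarantees—the first round trip can carry:

```javascript
// Back-of-the-envelope math for the first TCP round trip.
// Both values are common defaults, not guarantees.
var initcwnd = 10;   // initial congestion window, in segments
var mss = 1460;      // typical maximum segment size, in bytes

var firstRoundTripBytes = initcwnd * mss;
console.log( firstRoundTripBytes ); // 14600 bytes, or roughly 14KB
```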

By putting our critical styles in the head of the page, we end up delivering a visually complete page in as much time as it takes to make that initial TCP/IP connection. Then we defer the requests for any other stylesheets until a fraction of a second after render. It makes a tremendous difference: what appears to be a fully rendered website in as much time as it takes for the browser to say “I’d like to see this website.”

We’ll still load the rest of our stylesheets, of course, but we’ll do it in a way that doesn’t block the page render. There are some brand new standards in the works that will allow us to load stylesheets asynchronously at the browser level, but they’re not quite ready yet—so we’ll use a little JavaScript to load our non-critical CSS asynchronously, for now.

  <style>
    /* A large block of minified critical CSS here */
  </style>
  <script>
    function loadCSS( href ){
      /* The loadCSS script here. */
    }
    loadCSS( "all.css" );
  </script>

With the CriticalCSS approach in play, the head of our pages should look something like this.

We’ll have a block of our critical CSS inlined, followed by our loadCSS script. We call that function with the path to each of our CSS files, and they’re all loaded asynchronously—but it all happens quickly enough that there’s no visual lag, even if the user scrolls quickly.

Screenshot of the NPM landing page for the grunt-criticalcss project.

Grunt CriticalCSS

Since combing through a stylesheet for the styles we’d need to make the page appear rendered right away would be a nightmare, we automate the process with a Grunt task.

module.exports = function(grunt) {
  grunt.loadNpmTasks( "grunt-criticalcss" );

  var path = require( "path" ).resolve( "src/themes/bocoup/assets" );

  grunt.config( "criticalcss", {
    homepage: {
      options: {
        outputfile: path + "/critical/home.css",
        filename: path + "/style.css",
        url: "<%=baseurl%>?nocritical"
      }
    },
    work: {
      options: {
        outputfile: path + "/critical/work.css",
        filename: path + "/style.css",
        url: "<%=baseurl%>/work?nocritical"
      }
    }
  });
};

I like to think that we came up with a pretty clever approach to this.

For any page on the site that we wanted to enhance with CriticalCSS, we’d add a configuration for that page type in the Gruntfile. The task would generate a file named for that page and drop it into a critical directory.

Because we were using Vagrant, our local dev environments matched the way the site would be built on the server side. So, here, we determine the current URL—local or remote—and pass it directly to this task. So, these files get generated on the server the same way we’ve set them up locally. Nothing to upload, nothing to commit.

Screenshot of a pull request in the private github repo, filed by Tyler Kellen, titled “automate build process”

Tyler Kellen

Now, full disclosure: I’m workin’ on it, but honestly, I don’t know how to do that. I can put together a Grunt task well enough, but I don’t have any real background in deployment.

Bocoup does. That gave me an opportunity: I had a more meaningful issue to file: help me accomplish this thing—this very weird, very specific, very doable thing—and here’s why.

Done deal. Our CriticalCSS is generated on deployment, without us ever thinking about it.

function load_css( $slug ) {
  $criticalCssPath = TEMPLATEPATH . '/assets/critical/' . $slug . '.css';
  // Try to get the critical CSS for this page:
  $critical = @file_get_contents( $criticalCssPath );
  // Make sure the file exists, and isn’t empty:
  $criticalCssNotEmpty = $critical !== false && strlen( $critical ) !== 0;
  // Determine whether we should use CriticalCSS in the first place:
  $useCriticalCss = $criticalCssNotEmpty && (
    !isset( $_COOKIE['stylesCached'] ) || isset( $_GET['nocritical'] )
  );
  $css = cachebusted( '/assets/' . ( $localDev ? 'style.css' : 'style.min.css' ) );
  // Serve up critical CSS if we need it
  if ( $useCriticalCss ) {
    $output = '<style>' . $critical . '</style>';
    $output .= '<script>';
    $output .= ' window.enhance = {};!function(){function e(e,n,t){function o(){for(var n,r=0;r<d.length;r+)d[r].href&&d[r].href.indexOf(e)>-1&&(n=! /* This code has been truncated; see for full script */';
    $output .= ' window.enhance.loadCSS( "' . $css . '" );';
    $output .= " document.cookie = 'stylesCached=true; expires=0; path=/';";
    $output .= '</script>';
  } else {
    $output = '<link rel="stylesheet" type="text/css" href="' . $css . '">';
  }
  echo $output;
}

From there, adding any sort of manual step to the build process—“generate your CriticalCSS and copy it to this file”—would’ve been doomed from the start. Instead, we made the website smarter about how those files are loaded—instead of a plain ol’ link to the stylesheet in the head of our documents, we generated that markup using a PHP function.

At the time the page is rendered, this function sees if there’s a CriticalCSS file that matches the current page.

It fetches the contents of the file, then checks to make sure it actually got something out of it.

Then it determines whether we’ve set a cookie saying that our CriticalCSS has already been loaded. This cookie serves as a proxy for the state of the browser cache. If we’ve already loaded a page, it means we could fetch our CSS from the cache instead, and there’s no reason to bother with the CriticalCSS approach—pulling from the local cache would be faster.

If this is our first time loading a page, and we’ll have to fetch our stylesheets from the server, this function drops the contents of the CriticalCSS file directly into the page’s markup, then goes on to inject our LoadCSS function, the rest of our stylesheets, and set that cache-proxy cookie.
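For illustration, the cookie check itself boils down to something like this: a hypothetical helper (the function name is mine, not from the site’s source) that looks for the stylesCached flag in a raw cookie string such as `document.cookie`:

```javascript
// Hypothetical helper: given a raw cookie string (e.g. document.cookie),
// report whether the stylesCached flag is set. The cookie is only a
// proxy for the cache; it tells us a previous page load has already
// requested the full stylesheet.
function stylesAreCached( cookieString ) {
  return cookieString.split( ";" ).some( function( pair ) {
    return pair.trim() === "stylesCached=true";
  });
}
```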

Screenshot of the Google Image search results page for “cool lizards.” Many lizards are pictured, all of them inarguably cool.

Jenn Schiffer on Twitter

Full disclosure again: I’m not great at PHP. I couldn’t write this function.

That gave me another specific issue to file: I asked Jenn Schiffer to help me out with it. I had a specific task, an excuse to dig into the reasoning behind it, and an opportunity to elevate this stuff to everyone on the team. To get more people directly involved.

…Jenn asked that I represent her contribution to the project, visually, with the Google Images results for the phrase “cool lizards.”

Jenn Schiffer, everyone.

Asynchronous Webfonts title slide

Once we have a page showing up as quickly as possible, we can start looking for more specific points of failure—and webfonts are one of the biggest ones.

Screenshot of the previous iteration of the homepage, absent any text, resulting in only the MediaTemple logo and several empty blocks of solid color.

Most browsers will wait about three seconds for a font to load before giving up and showing the fallback text, which is a lifetime in performance terms. But WebKit-based browsers—Safari, older Android, and Blackberry—wait 30 seconds. This is a huge single point of failure for a site, no matter how fast it loads otherwise.

@font-face {
  font-family: 'WebFont';
  src: url('webfont.eot');
  src: url('webfont.eot?#iefix') format('embedded-opentype'),
       url('webfont.woff') format('woff'),
       url('webfont.ttf') format('truetype');
  font-weight: normal;
}

@font-face {
  font-family: 'WebFont';
  src: url('webfont-bold.eot');
  src: url('webfont-bold.eot?#iefix') format('embedded-opentype'),
       url('webfont-bold.woff') format('woff'),
       url('webfont-bold.ttf') format('truetype');
  font-weight: bold;
}

Font-face has a pretty smart syntax; the fallback pattern is built-in, so no matter what format a browser needs, we’re really only looking at one request for each file. For a while, I advocated loading all our type asynchronously—and that still works, but the issue isn’t loading the fonts so much as applying them. If our stylesheet calls for text to use a font that isn’t available at render time, that text isn’t shown until the font is available.

CSS Font Loading Module Level 3 specification

So, instead of messing with @font-face—and potentially breaking a really smart native loading pattern—the “font events” spec gives us a JavaScript API that tells us when our font files are fully loaded. We can then apply them to the page. Since they’re not being applied during the initial render, they won’t block rendering.

  document.fonts.ready.then(function() {
    document.documentElement.setAttribute( "class", "fonts-ready" );
  });

  p {
    font-family: sans-serif;
  }
  .fonts-ready p {
    font-family: "Garage Gothic", sans-serif;
  }

The font events API is pretty simple; a couple lines of JavaScript to say “once all the fonts are fully loaded, add a fonts-ready class to the HTML element.” Then in our CSS, we qualify the webfont family with that class. That’s it; you’re done. No changes to anything else. Now you might see your fallback fonts for a split second before the webfont is loaded, but I honestly consider that a feature—whatever it is that your user wanted to read is available right away.

  function avoid_font_events() {
    $avoid = isset( $_COOKIE['stylesCached'] );
    if ( $avoid ) {
      echo ' fonts-loaded';
    }
  }

We took this a little further by setting a cookie the first time our fonts are loaded—same as we did with our CriticalCSS—saying to avoid the JavaScript solution altogether if the cache is already primed. This PHP function adds the fonts-loaded class to the markup as we render it if that cookie is present, since the browser already has the font files at the ready.

Screenshot of the Font Face Observer GitHub repository landing page

Font Face Observer

This is a brand new spec; I think it’s only available in Chrome. Bram Stein put together a great polyfill for it, though.

A Comprehensive Guide to Font Loading Strategies

For as simple as it is to implement, asynchronous font application is a surprisingly complex topic—new approaches seem to be rolling out all the time, and new standards are being developed that would allow us to control the application of webfonts at the browser level.

For the time being, though, what we have now is working pretty well for us—and it took all of fifteen minutes to set up.

Screenshot of the Critical Webfonts blog post, linked below

Critical Webfonts

The approach I think I’m most excited about is named “Critical Webfonts.”

Just like the name implies, this approach uses the same kind of smoke-and-mirrors as the CriticalCSS approach: we progressively apply our fonts in a way that hides our machinations from the user when they first hit the page.

Two (unitless) timelines, starting at an icon representing an empty browser window, and ending at an icon representing a fully rendered page. The first line has a white line leading to a mark at around 20%, followed by a solid blue line. The second line has a white line leading to a mark around 5%, followed by a dotted blue line, then a second mark at around 20% followed by a solid blue line.

Instead of showing our fallback fonts until we’ve finished loading the full webfont and all the files for its weights and styles, we load a subset of the webfont—a font file containing just letters, numbers, and basic punctuation—in the base style and weight.

The full fonts are still loaded at more or less the same time, and we still don’t hide any text from the user—we just chip away some of the time the user spends with fallback fonts. In fact, they often won’t see them at all.

Now, again, showing the user a split second of unstyled text isn’t the end of the world—it’s sure as hell better than stopping them from doing what they need to do. But having all those fonts snap in all at once after the fact can cause a pretty major reflow, and that could be jarring. Partially, though? This approach is for us; it’s for me. Because I like webfonts; “90% typography,” right? I want users seeing our design as we designed it; I don’t want to compromise on that. If we’re clever about what we show the user, we don’t have to.

Screenshot of the Font Squirrel landing page

For generating the subsetted fonts, I use Font Squirrel. There are command-line tools out there that do it, too; Typekit and Google Fonts both give you the option of subsetting. This doesn’t really add much overhead—I mean, how often do you have to go back and tinker with your font files themselves, day-to-day?

  var bodySubset = new w.FontFaceObserver( "Lato Subset", {} ),
      subsetFonts = [ bodySubset.check() ];

  w.Promise
    .all( subsetFonts )
    .then( function() {
      w.document.documentElement.className += " subsets-ready";
    });

Once we’ve generated the subsetted fonts, implementation is almost identical to the handful of lines of JavaScript and CSS we’re using on bocoup.com right now, but it works in two stages. First, JavaScript applies a class to the document once our tiny, subsetted font files have finished loading.

  var body = new w.FontFaceObserver( "Lato", {} ),
      bodyBold = new w.FontFaceObserver( "Lato", {
        weight: "bold"
      }),
      bodyItal = new w.FontFaceObserver( "Lato", {
        style: "italic"
      }),
      bodyBoldItal = new w.FontFaceObserver( "Lato", {
        weight: "bold",
        style: "italic"
      }),
      fullFonts = [ body.check(), bodyBold.check(), bodyItal.check(), bodyBoldItal.check() ];

  w.Promise
    .all( fullFonts )
    .then( function() {
      w.document.documentElement.className += " all-fonts-ready";
    });

We then do the exact same thing with our full-sized font files and all their alternate weights and styles: once all of those files are loaded, apply a class to the document.

p {
  font-family: sans-serif;
}
.subsets-ready p {
  font-family: "Lato Subset", sans-serif;
}
.all-fonts-ready p {
  font-family: "Lato", sans-serif;
}

CSS does the rest. First, our web fonts are qualified by the subset class—then, once all the other font files have finished loading and the all fonts class has been applied, the subset styling is overridden.

This is a little bit trickier than what we’re using on bocoup.com, and I don’t know that we’d go back and change what we have now—I mean, it’s working just fine for us. This wouldn’t take much more time to implement, though.

Responsive Images title slide

Alright, show of hands: how many of you saw this subject coming? That’s right—that’s right. It’s time to talk about responsive images. Are we ready? I am going to detail how our merry band of developers—led by the illustrious Chair of the Responsive Issues Community Group, speccers of the responsive images spec itself—implemented responsive images on bocoup.com.

Responsive Images Now Landed in WordPress Core

We used WordPress.

  <img src="feat.jpg"
    srcset="feat-383x287.jpg 383w,
            feat-500x375.jpg 500w,
            feat-640x480.jpg 640w,
            feat-700x525.jpg 700w,
            feat.jpg 800w"
    sizes="(min-width: 75em) 400px,
           (min-width: 55em) 33.4vw,
           (min-width: 42.5em) 40vw,
           100vw"
    alt="">

Okay, yes, there’s a little more to that story—I did a little bit of tinkering with our markup and our compression, but not very much. Bottom line, the tooling around responsive images is getting better and better with time. It’s becoming seamless; just a part of the way websites are put together, baked right into our CMSes.

Responsive Images: Use Cases and Documented Code Snippets to Get You Started

Responsive images were a real messy topic, not too long ago. Hell, I subjected people to hour-long talks about it—poring over syntaxes and use cases and the many, many fights picked with browsers and standards reps alike. It was a messy topic by necessity—“changing the way images are rendered on the web” was a real big boat to rock, and complex problems often mean complex solutions.

It is extremely rare where one optimization lets us knock off such a significant amount of page weight, but here we are staring one such technique right in the face.
72% less image weight.
—Tim Kadlec, Why We Need Responsive Images

The RICG’s goal was to give us options for delivering only the image sources appropriate to each user. Serving the same images to everyone means that a user on a small, low-resolution display bears all the bandwidth costs of massive, high-resolution images, but ends up with none of the benefits. A high resolution image on a low resolution display looks like any other low resolution image; it just takes longer.

Thanks to everyone that had their hands up, we have those options now. And with WordPress—and with many, many CMSes like it—we barely have to think about it. A user uploads an image, and everything else takes place behind the scenes: generating the alternate cuts, the markup, everything. That tooling is provided for us.

That’s tremendously exciting. That’s you and me changing the whole web for the better.

Performance Budgets title slide

After all we’ve done, it’s incredibly important that we don’t consider our performance work finished—a few blocking requests or an uncompressed PNG and we’re in trouble again. You can’t rely on everyone who might ever touch the site pinky-swearing that they’ll try to keep things fast.


So, for the non-JavaScripters: unit testing is a way of testing your JavaScript for regressions. When you’re working on a feature, you start by writing a test for that feature—which fails. Once you finish that feature, the test passes—but it also sticks around. Every time you repeat this process, you’re not just running the one test: you’re running all the tests you’d written previously, to make sure you don’t break anything else. This performance budget Grunt task works like one of those tests: If you’ve gone over budget, your build process fails.
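Wiring up a budget like that looks roughly like this. A sketch, assuming the task is Tim Kadlec’s grunt-perfbudget—which runs the page through WebPageTest—and using a placeholder URL, API key, and budget numbers rather than our real ones:

```javascript
// Gruntfile.js sketch: fail the build when the page blows its budget.
// grunt-perfbudget tests the URL via WebPageTest; all values here are
// placeholders for illustration.
module.exports = function( grunt ) {
  grunt.initConfig({
    perfbudget: {
      all: {
        options: {
          url: "https://example.com/",
          key: "YOUR_WEBPAGETEST_API_KEY",
          budget: {
            visualComplete: "4000", // fail past 4s to visually complete
            SpeedIndex: "1500"      // ...or a Speed Index over 1500
          }
        }
      }
    }
  });

  grunt.loadNpmTasks( "grunt-perfbudget" );
  grunt.registerTask( "default", [ "perfbudget" ] );
};
```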

Screenshot of a terminal window, showing perfbudget task completing successfully


This didn’t really work for us. In order to run this task, it needs a public-facing URL—which meant our local tests would be running against the live website. Stopping the build locally wouldn’t do us any good. In fact, stopping the build locally would prevent us from fixing the issues that put us over budget in the first place.


So, we went with a completely external service. Calibre runs scheduled performance tests against the live site and notifies us when we go over budget. It ensures that we’re performance testing against the real content that’s been uploaded to the site, potential server issues, and so on—real world conditions. If something breaks, we’re all notified, and we can push a fix right away.

Screenshot of a Slack channel named #bocoupcom, populated by passing and failing performance budget notifications

And it has Slack integration, which means it isn’t me running a test manually when I think to do it then storming around to investigate and yell at people—it’s not just my problem, it’s everyone’s problem.

Screenshot of the PageSpeed Insights results page for bocoup.com, with a 99/100 score for “speed”

PageSpeed Insights

So, what’d all of this get us? A fast website, for one thing—no question of that. More than that, though, this project showed everyone at Bocoup what we were able to accomplish, performance-wise. It was a little more work, sure, but the kind of work that led to a real, measurable result—no open-ended GitHub issue could ever have led to that.

…That one point, if you’re wondering? That’s because of Google Analytics; the script doesn’t have far-future cache headers set.

Kills me.

Speed Index
First View: 711
Repeat View: 671

And that’s why my parents gave me a first name that fits in “high score” spaces.

It earned us some bragging rights, sure, but that’s not what’s really important to me.

I am the greatest; I said that even before I knew I was.
—Muhammad Ali, Greatest of All Time

…I mean, okay. It’s a little bit important to me.

@wilto enjoy that view for a little while. I am about ready to upgrade our site with http2 and server-push…
—Scott Jehl

…And, y’know, a little friendly competition never hurt anybody.

Inclusive Web Development title slide

More than any of that, though, the underlying message—that we can build something faster than anyone expects; something that works for everyone—that message resonated with the team. It’s something everybody at Bocoup owns now; some in smaller parts, some in larger ones. It’s everyone’s concern, and it’s something we want to make a part of our contribution to the direction the web is headed.

I hope everyone here feels the same way. I hope we all manage to beat back the tide a little bit—to find ways to sneak in those small decisions that, together, will guide the web toward something better, faster, and more inclusive.

Performance Under Pressure title slide, as seen previously