Having described the problem of defining and measuring site speed in the first part, let’s now look at how to improve it.
Improving Site Speed
Most developers I’ve spoken to believe improving site speed is about reducing object sizes, so that less data moves across the wire. That certainly helps, but Figure 2 in our previous post gives you a good sense that other problems are often just as important.
In my experience, there are six important areas of focus. I’ve ranked them here from what I believe is the most to least important for a large, modern web company:
- Reducing DNS lookups
- Reducing the number of HTTP requests
- Improving parallelization of requests
- Putting data closer to customers
- Reducing file size
1. Reducing DNS lookups
Why are DNS lookups the number one issue? DNS servers have high variance in response times, typically dependent on their load and configuration, and resolving a domain name to an IP address often involves many network hops. Since website owners don’t control the web’s DNS infrastructure, reducing the number of DNS requests is important for both site speed and availability. How do you do this? You reduce the number of domains and subdomains to which the customer’s browser makes requests. If you hook Fiddler or a similar tool up to Google’s product search, you’ll see it uses only four domains and subdomains. At eBay, we use more, and it’s something we’re working on.
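A quick way to audit this yourself is to collect a page’s resource URLs (from Fiddler or your browser’s network panel) and count the distinct hostnames, since each distinct hostname costs the browser at least one DNS lookup on a cold cache. Here’s a minimal sketch; the hostnames are made up for illustration, not real eBay or Google domains:

```python
from urllib.parse import urlparse

def unique_hosts(resource_urls):
    """Return the set of distinct hostnames a page references.

    Each distinct hostname forces (at least) one DNS lookup on a
    cold cache, so fewer hosts means fewer lookups.
    """
    return {urlparse(u).hostname for u in resource_urls}

# Hypothetical resource list pulled from a results page.
urls = [
    "http://static.example.com/app.js",
    "http://static.example.com/app.css",
    "http://thumbs1.example.com/item1.jpg",
    "http://thumbs2.example.com/item2.jpg",
    "http://www.example.com/search?q=camera",
]
print(len(unique_hosts(urls)))  # 4 distinct hosts -> 4 cold-cache lookups
```

Five resources, but only four lookups; consolidating the two thumbnail subdomains into one would drop it to three.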
2. Reducing the number of HTTP requests
The second issue is reducing HTTP requests. Each HTTP request ties up a connection between the user’s browser and the server, and carries a fixed overhead: setup, transfer, and acknowledgement of the transmission. Fewer HTTP requests mean lower fixed costs and better use of the concurrent bandwidth between the browser and the server. In July 2009, we made nearly 200 GET requests to compose our results page; we’re now down to just over 100. We still have a way to go, and plenty of great ideas to get there. One of the easier ones is “spriting” static images into a single image and slicing and dicing it on the client. To see this taken to its extreme, take a look at one of Google’s sprites for their product search – this is something they’ve done well.
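To get a feel for why halving the request count matters, here’s a back-of-envelope model. Assume each request pays a fixed overhead and the browser fetches at most a handful of resources concurrently, so requests complete in serial “rounds.” The 50 ms overhead and limit of 6 concurrent fetches are illustrative assumptions, not measured figures:

```python
import math

def fixed_overhead_ms(num_requests, per_request_overhead_ms=50, max_parallel=6):
    """Rough fixed cost of issuing num_requests requests.

    Each request pays a setup/ack overhead; with at most max_parallel
    in flight at once, requests complete in ceil(n / max_parallel)
    serial rounds, each costing one overhead interval.
    """
    rounds = math.ceil(num_requests / max_parallel)
    return rounds * per_request_overhead_ms

# Dropping from ~200 requests to ~100 halves the number of rounds:
print(fixed_overhead_ms(200))  # 34 rounds * 50 ms = 1700 ms
print(fixed_overhead_ms(100))  # 17 rounds * 50 ms = 850 ms
```

The model ignores payload transfer time entirely; the point is that fixed per-request overhead alone scales with request count, which is what spriting attacks.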
3. Improving parallelization of requests
The third area is keeping the browser busy, making sure it fetches as much as it can in parallel. Browsers typically fetch up to 7 resources at the same time, but they limit how many simultaneous requests they make to any one subdomain. Most large properties use tricks to get around this. For example, we serve our thumbnail images from several “thumbs” subdomains that actually point to the same machines, so the browser happily opens more simultaneous connections. The flipside, of course, is more DNS lookups; most large web properties seem to be settling on two or three image-serving subdomains these days. It’s also important to think about how the page is constructed, so the right elements are fetched in the right order; we do lots of experimentation to make sure we build the page in the best possible order for our customers.
4. Putting data closer to customers
6. Reducing file size
Our work on site speed over the last year has delivered amazing results to our customers. Our users are buying at least 3% more each week than they used to, simply because they get what they want faster and can use their time better. Of course, this is also great for our sellers who build their businesses on eBay. We continue to work on site speed, and we’ve got some real rocket science in the hopper that’ll take our experience to a new, faster level. I’ll write about that again someday soon. Thanks for stopping by!
Hugh E. Williams
Vice President, Buyer Experience Development