Accelerating the Internet (or actually, Squid) with Varnish

Squid is an old workhorse of a caching proxy server that can be configured to act as a reverse proxy. Varnish is the opposite: an extremely fast HTTP accelerator that is configured to be, well, just that. So I thought, just for the fun of it: what about configuring Varnish to cache the Internet for me, that is, using it as a general forward caching proxy?

Obviously, we can’t define Varnish backends for the entire world. But Squid can handle that. So I used our corporate Squid proxy and put a local Varnish cache in front of it. The VCL is very simple:

backend default {
  # This is squidbox
  .host = "";
  .port = "3128";
}

That’s it, actually. Start up Varnish, then point your web browser at that Varnish instance’s address and HTTP port as its proxy.
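For concreteness, starting it up from the command line might look something like this; the paths, listen port and cache size below are assumptions based on a default install, so adjust for your distribution:

```shell
# Start varnishd with the one-backend VCL above; :6081 is a common
# default client port for varnish (an assumption, adjust as needed).
varnishd -f /etc/varnish/default.vcl -a :6081 -s malloc,256m

# Then either set http://varnishbox:6081/ as the HTTP proxy in your
# browser, or sanity-check it from the command line:
curl -x http://varnishbox:6081/ -s -o /dev/null -w '%{http_code}\n' http://example.com/
```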

Then, testing with an ugly little Perl script, “proxytest”, we got some quite interesting results:

$ for i in "" squidbox:3128 varnishbox:6081; do ./proxytest $i 10; done
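The proxytest script itself is not published; a minimal shell stand-in, assuming it takes a proxy as host:port (empty for a direct connection) and a repeat count, could look like this (the target URL is a guess):

```shell
# Hypothetical stand-in for the unpublished "proxytest" Perl script.

# Build the curl proxy option from a "host:port" argument; empty means direct.
proxy_opt() { [ -n "$1" ] && printf '%s' "-x http://$1"; }

# Fetch the URL N times (default 10), optionally through the given proxy.
proxytest() {
    for i in $(seq 1 "${2:-10}"); do
        curl -s $(proxy_opt "$1") -o /dev/null http://slashdot.org/
    done
}

# Usage, mirroring the loop above:  time proxytest squidbox:3128 10
```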

Lesson learned: Varnish is some 10 times faster than Squid when caching the Internet!

With thanks to Eric for playing with the settings.


5 Responses to “Accelerating the Internet (or actually, Squid) with Varnish”

  1. Tomasz says:

    Don’t you have problems with Varnish not returning partial content? For example, YouTube videos are sent to the client only after they have been downloaded fully, not during the download. And yum hits its time limit before bigger packages are downloaded and returned by Varnish.

  2. ingvar says:

    Tomasz: Dunno. I did this mostly for fun, not as part of a production system. If you test it, the results would be interesting.


  3. Dave says:

    I know you were primarily intending to show proof of concept and not performance, but since you made a performance comparison …

    Could you please post a few details of your test configuration? Were varnishbox and squidbox on similar hardware, network bandwidth (gigabit?), and load? Since your varnishbox test ran after the squidbox test, note that varnishbox was given the unfair advantage of a pre-loaded Squid cache. Presumably it takes Squid 1/10th of your “” time, or about 0.1 seconds, to load the cache initially, through no fault of its own. When you said ‘local varnish cache’, you didn’t mean that “varnishbox” is localhost, did you? That would be an extremely unfair comparison to a corporate Squid cache.

    I also wonder what the squid.conf maximum_object_size_in_memory is set to, as squid is notoriously bad with large in-memory objects and does much better with disk cache, taking advantage of the Linux filesystem cache.


  4. ingvar says:


    This post was mostly for fun, and not meant as a scientific performance test.

    Such a test could be done, of course, but it would take some time, some spare hardware and, not least, a few weeks of Squid and Varnish tuning practice to achieve fair results. I don’t have those resources handy. Besides, I used our corporate Squid proxy as a forwarder.

    On the other hand, that Varnish is faster and better designed for HTTP acceleration than Squid in general terms seems fairly well documented. A Google search will tell.

    For the fun of it, I installed Varnish and Squid on my workstation (some no-brand box: 1 GB RAM, Intel E6550 @ 2.33 GHz CPU, ST3250410AS disk) and ran the test script against slashdot directly from localhost, that is, with Varnish _not_ using Squid as a backend. Serving from localhost over the lo interface, network bandwidth should not be a factor.

    I did no special tuning. Squid: default Fedora config, logging removed, plus cache_mem 256 MB, pipeline_prefetch on, maximum_object_size 4096 KB and maximum_object_size_in_memory 4096 KB (as stated above, I have no experience with Squid tuning). Varnish: default Fedora configuration, no tuning.

    On cold caches (fetch from the backend, then serve nine times from cache), Varnish outperforms Squid by a factor of 4. On hot caches (just serving content from cache, 10 repetitions), Varnish outperforms Squid by a factor of 50 on this hardware.

    Lowering maximum_object_size_in_memory to, for example, 8 KB makes no visible change in the results.


  5. Dave says:


    Thanks, that’s a fairer comparison and presumably has repeatable results. I can’t think of an explanation for the factor of 50 on hot caches, though, when your previous test showed a factor of 10 and it seems the first was cold and the second hot. For a hot-cache test it shouldn’t matter that your previous configuration sent Varnish through Squid and this configuration didn’t. Ah, maybe the difference is that the previous configuration had squidbox and varnishbox on different machines over a network, not on localhost. Is that right?

    I found your post because I mostly use Squid as a proxy, not as an accelerator. Currently I’m more bandwidth-limited than CPU-limited, so I’m not seriously considering putting Varnish in front of Squid, but maybe I will when I get to use 10 Gbit interfaces.


