sl/plan9

I put these domain names in /etc/hosts:

ks (static nginx on i5)
cat-v.org.ovh (werc on rc-httpd on 9front with cwfs on kvm on i5)
cat-v.org.rc-httpd (rc-httpd without werc, on 9front with cwfs on kvm on i5)
cat-v.org.ramnode
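
For reference, the /etc/hosts entries behind those names look roughly like this. Only cat-v.org's address (176.31.253.70) is confirmed by the ping output below; the other addresses are placeholders.

```
176.31.253.70   ks
x.x.x.x         cat-v.org.ovh        # placeholder address
x.x.x.x         cat-v.org.rc-httpd   # placeholder address
x.x.x.x         cat-v.org.ramnode    # placeholder address
```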

$ ping cat-v.org
PING cat-v.org (176.31.253.70) 56(84) bytes of data.
64 bytes from ks (176.31.253.70): icmp_req=1 ttl=60 time=9.23 ms
64 bytes from ks (176.31.253.70): icmp_req=2 ttl=60 time=9.23 ms

$ ( time wget -O /dev/null -q http://cat-v.org ) 2>&1 | grep real

p->ks
real 0m0.034s
real 0m0.039s
real 0m0.034s
real 0m0.034s
real 0m0.034s
real 0m0.033s

p->cat-v.org.ovh
real 0m0.170s
real 0m0.163s
real 0m0.159s
real 0m0.169s
real 0m0.159s
real 0m0.162s

p->cat-v.org.rc-httpd
real 0m0.064s
real 0m0.065s
real 0m0.062s
real 0m0.069s
real 0m0.057s
real 0m0.067s

p->cat-v.org.ramnode
real 0m0.431s
real 0m0.436s
real 0m0.432s
real 0m0.427s
real 0m0.438s
real 0m0.432s
real 0m0.438s
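
To turn a column of those timings into an average, a small awk sketch (assumes time-builtin-style lines like the ones above):

```shell
# Average the "real" times from a batch of runs; parses lines of the
# form "real 0m0.431s" as printed by the shell's time builtin.
avg=$(awk '/^real/ {
        t = $2
        sub(/^[0-9]+m/, "", t)   # drop the minutes prefix (fine for sub-minute runs)
        sub(/s$/, "", t)         # drop the trailing "s"
        sum += t; n++
    }
    END { printf "%.3f", sum / n }' <<'EOF'
real 0m0.431s
real 0m0.433s
EOF
)
echo "$avg"
```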

run 10 times:
time ( for bla in `seq 1 100`; do ( time wget -O /dev/null -q http://cat-v.org ) 2>&1 | grep real & done; wait )
100 requests started in parallel take about 10s on cat-v.org.ramnode and 11s on cat-v.org.
On ovh, nginx only needs 430ms for the whole thing.  With 400 requests to nginx in
parallel this takes a bit longer: 1.8s
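
For comparison, fully serial fetches would cost roughly requests x per-request time, which shows how much the concurrency helps (arithmetic only; 0.165s is an eyeballed middle of the ovh per-request numbers above, and I'm reading the 11s figure as werc on ovh):

```shell
# Serial estimate = requests x per-request time; compare with the
# measured parallel totals above.
awk 'BEGIN {
    printf "ramnode:  100 x 0.43s  = %.1fs serial, ~10s measured in parallel\n", 100 * 0.43
    printf "ovh/werc: 100 x 0.165s = %.1fs serial, ~11s measured in parallel\n", 100 * 0.165
}'
```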

If I cut werc out of the loop (only use rc-httpd) I can see the whole thing being
done in 4s.  If I up the number of parallel requests to 200 I get 9s.
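
Those wall-clock totals translate into rough request rates (just the arithmetic, using the numbers above; truncated to integers):

```shell
# Implied requests/second for the parallel runs above.
awk 'BEGIN {
    printf "nginx,    100 parallel: %d req/s\n", 100 / 0.43
    printf "nginx,    400 parallel: %d req/s\n", 400 / 1.8
    printf "rc-httpd, 100 parallel: %d req/s\n", 100 / 4
    printf "rc-httpd, 200 parallel: %d req/s\n", 200 / 9
}'
```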

Even if 9front didn't cache, the Linux host that carries the disk images caches
them anyway.