ReduceBandWidth

Note: The recipes here are for PmWiki versions 0.6 and 1.0 only. For PmWiki 2.0 recipes, see Cookbook.


Goal

Reduce the bandwidth used by the web server.

Quite a few hosting accounts probably come with a data-transfer limit. After the limit is reached, the site is either blocked for a while, or the host charges you extra.

Solution

It might be possible to reduce the amount of data transferred with very little effort. Most browsers can accept compressed data (even IE does). The only thing to do is to compress the PHP output, depending on the capabilities of the browser making the request.

Depending on how PHP was compiled on your host (PHP compiled with "--with-zlib"), this might be as easy as adding

ob_start("ob_gzhandler");

as the first line of your local.php script.
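If you want to guard against hosts where zlib is missing, the call can be wrapped in a check. A minimal sketch for the top of local.php (the ob_get_level() guard is an extra precaution, not part of the original recipe):

```php
<?php
// Only enable compression when the zlib extension is actually
// available, and when no output buffer has been started yet.
if (extension_loaded('zlib') && ob_get_level() == 0) {
  ob_start('ob_gzhandler');
}
```

If zlib is absent, the script simply continues without compression instead of failing.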

Two requests on my test system, the first without the gzhandler, the second with it:

  • 192.168.0.1 - - [12/Jan/1994:10:08:32 +0100] "GET /ARCHIVE/public_html/ HTTP/1.1" 200 8666 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; T312461)"
  • 192.168.0.1 - - [12/Jan/1994:10:08:53 +0100] "GET /ARCHIVE/public_html/ HTTP/1.1" 200 2100 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; T312461)"

(The clock is broken on my test machine; this isn't a message from the past :) )

In this case the reduction is about 75%. The exact ratio depends on your pages; I see reductions between 50% and 80%.

Sure, it uses a bit more CPU power on both the server and the client side, but we are under a data limit, not a CPU limit. The transfer may actually be faster, since fewer bytes are sent.
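The kind of ratio quoted above is easy to reproduce offline. A small sketch that gzips a chunk of made-up, repetitive HTML and prints the reduction (the sample markup is invented for the demonstration; real pages typically compress somewhat less):

```php
<?php
// Build some repetitive HTML, the kind of markup a wiki emits.
$html = str_repeat("<tr><td class='wikicell'>some cell text</td></tr>\n", 200);

// Compress it the same way ob_gzhandler would (gzip encoding).
$compressed = gzencode($html);

$ratio = 100 - round(100 * strlen($compressed) / strlen($html));
echo "original:   ", strlen($html), " bytes\n";
echo "compressed: ", strlen($compressed), " bytes\n";
echo "reduction:  ", $ratio, "%\n";
```

Highly repetitive markup like this compresses extremely well; ordinary pages land in the 50-80% range mentioned above.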

http://www.php.net/manual/en/function.ob-gzhandler.php

-bram-

Discussion

=note Another approach to resolving this problem would be to combine compression with page caching (e.g., SimplePageCache). This would save cpu cycles and time on the server. I may do this as an extension in pmwiki-0.6. --<#>?

=note Right; in case you are using SimplePageCache, you could modify the ob_start call in the cache script as described. However, ob_start calls may be nested, so it would also work exactly as described above. BrBrBr

=note CPU cycles are generally a non-issue. Modern machines compress content at the rate of several MB per second -- much faster than PmWiki can possibly generate the output. On the other hand, from my reading of the PHP documentation, using ob_start() has the disadvantage of delaying output until the entire page has been generated, which might be undesirable for long pages. To avoid this, it might be necessary to use mod_gzip instead. -- Reimer Behrends

=note Thanks for the extra information. Compression does put an extra load on the machine, regardless of how fast it is; you are right that the extra effort should not be an issue (unless you have to pay per CPU cycle, as in my old VAX/VMS days).
Not everyone can add mod_gzip (I think most can't). The (short) delay might annoy users, but in total the page will load faster. Regardless of compression, I would recommend splitting pages that are that long.
Another advantage of using ob_start is that you can call header() wherever you like in the code, even after the script has already produced output. Even a misplaced space (such as one after the ?>) will not mess up the output.

How to check zlib

Create a script (test.php, or whatever name you like) and put it on your server:

<?php
phpinfo();
?>

Then look for the zlib section in the output.
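If you prefer not to scan the phpinfo() output by eye, extension_loaded() gives a direct answer. A minimal sketch:

```php
<?php
// Report whether the zlib extension (and thus ob_gzhandler) is usable.
if (extension_loaded('zlib')) {
  echo "zlib is available; ob_gzhandler should work\n";
} else {
  echo "zlib is missing; ob_gzhandler will not compress\n";
}
```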

How to check the result

If you have access to the raw log files, simply compare the response sizes before and after adding the gzhandler.

Alternative

An alternative is to install and enable mod_gzip in the Apache configuration. This transparently compresses anything compressible that Apache serves.
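A hypothetical httpd.conf fragment for mod_gzip (Apache 1.3); the directive names follow the mod_gzip documentation, but check the syntax against your installed version:

```apache
<IfModule mod_gzip.c>
  mod_gzip_on Yes
  # Compress textual content, leave already-compressed images alone.
  mod_gzip_item_include mime ^text/html$
  mod_gzip_item_include mime ^text/plain$
  mod_gzip_item_exclude mime ^image/
</IfModule>
```

With Apache 2, the bundled mod_deflate serves the same purpose.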
