What one can and should optimize
- Hardware
- OS / libraries
- SQL server (setup and queries)
- API
- Application
Optimizing hardware for MySQL
- If you need big tables (> 2G), you should consider using 64-bit
  hardware like Alpha, SPARC, or the upcoming IA-64. As MySQL uses a lot
  of 64-bit integers internally, 64-bit CPUs give much better performance.
- For large databases, the optimization order is normally RAM, fast
  disks, then CPU power.
- More RAM can speed up key updates by keeping most of the used key pages
  in RAM.
- If you are not using transaction-safe tables, or have big disks and
  want to avoid long file checks, a UPS is a good idea so you can take
  the system down cleanly on a power failure.
- For systems where the database is on a dedicated server, look at 1G
  Ethernet. Latency is as important as throughput.
Optimizing disks
- Have one dedicated disk for the system, programs, and temporary files.
  If you make very many changes, put the update logs and transaction logs
  on dedicated disks.
- Low seek time is important for the database disk. For big tables you
  can estimate that you will need:
  log(row_count) / log(index_block_length/3*2/(key_length + data_ptr_length)) + 1
  seeks to find a row. For a table with 500,000 rows indexed on a MEDIUMINT:
  log(500,000) / log(1024/3*2/(3+4)) + 1 = 4 seeks
  The above index would require: 500,000 * 7 * 3/2 = 5.2M. In real life,
  most of the blocks will be buffered, so probably only 1-2 seeks are
  needed.
- For writes you will need (as above) 4 seek requests to find where to
  place the new key, and normally 2 seeks to update the index and write
  the row.
- For REALLY big databases, your application will be bound by the speed
  of your disk seeks, which grow by N log N as you get more data.
- Split databases and tables over different disks. In MySQL you can use
  symbolic links for this.
- Striping disks (RAID 0) will increase both read and write throughput.
- Striping with mirroring (RAID 0+1) will give you safety and increase
  the read speed. Write speed will be slightly lower.
- Don't use mirroring or RAID (except RAID 0) on the disk used for
  temporary files or for data that can easily be regenerated.
- On Linux, run hdparm -m16 -d1 on the disks at boot to enable
  reading/writing of multiple sectors at a time, and DMA. This may
  improve response time by 5-50%.
- On Linux, mount the disks with async (the default) and noatime.
- For some specific applications a RAM disk may be useful for a few very
  specific tables, but normally this is not needed.
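The seek-estimate formula earlier in this section is easy to check numerically. The function below is an illustrative sketch only; the 4-byte data-pointer length and the 1024-byte index block are the values assumed in the worked example above.

```python
import math

def estimated_seeks(row_count, key_length, data_ptr_length=4,
                    index_block_length=1024):
    """Estimate disk seeks to find one row via a B-tree index.

    Implements the formula from the text:
      log(row_count) /
        log(index_block_length/3*2 / (key_length + data_ptr_length)) + 1
    """
    # Effective fanout of one index block (blocks are assumed ~2/3 full).
    fanout = index_block_length / 3 * 2 / (key_length + data_ptr_length)
    return math.log(row_count) / math.log(fanout) + 1

# 500,000 rows indexed on a MEDIUMINT (3-byte key, 4-byte data pointer):
print(round(estimated_seeks(500_000, key_length=3)))  # 4
```

This reproduces the "4 seeks" figure from the text; in practice most index blocks are cached, so the real number is lower.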
Optimizing OS
- No swap. If you have memory problems, add more RAM instead, or
  configure your system to use less memory.
- Don't use NFS disks for data (you will have problems with NFS locking).
- Increase the number of open files for the system and for the SQL server
  (add ulimit -n # in the safe_mysqld script).
- Increase the number of processes and threads for the system.
- If you have relatively few big tables, tell your file system not to
  break up the files across different cylinders (Solaris).
- Use file systems that support big files (Solaris).
- Choose which file system to use; ReiserFS on Linux is very fast for
  open, read, and write, and file checks take just a couple of seconds.
Choosing API
- Perl
  - Programs portable between OSes and databases.
  - Good for quick prototyping.
  - One should use the DBI/DBD interface.
- PHP
  - Simpler to learn than Perl.
  - Uses fewer resources than Perl, which makes it good for embedding in
    web servers.
  - One can get more speed by upgrading to PHP 4.
- C
  - The native interface to MySQL.
  - Faster and gives more control.
  - Lower level, so you have to work more.
- C++
  - Higher level, giving you more time to code your application.
  - Is still in development.
- ODBC
  - Works on Windows and Unix.
  - Almost portable between different SQL servers.
  - Slow; MyODBC, which is a simple pass-through driver, is 19% slower
    than using a native interface.
  - Many ways to do the same thing; hard to get things to work, as many
    ODBC drivers have different bugs in different areas.
  - Problematic; Microsoft changes the interface once in a while.
  - Insecure future (Microsoft pushes OLE more than ODBC).
- JDBC
  - In theory portable between OSes and databases.
  - Can be run in the web client.
- Python + others
  - May be fine, but we don't use them.
Optimizing the application
- One should concentrate on solving the problem.
- When writing the application, decide what is most important:
  - Speed
  - Portability between OSes
  - Portability between SQL servers
- Use persistent connections.
- Cache things in your application to lessen the load on the SQL server.
- Don't query columns that you don't need in your application.
- Don't use SELECT * FROM table_name...
- Benchmark all parts of your application, but put the most effort into
  benchmarking the whole application under the worst possible
  'reasonable' load. By doing this in a modular fashion, you should be
  able to replace the found bottleneck with a fast 'dummy module'; you
  can then easily identify the next bottleneck (and so on).
- Use LOCK TABLES if you do a lot of changes in a batch; for example,
  group multiple UPDATEs or DELETEs together.
Portable applications should use
- Perl DBI/DBD
- ODBC
- JDBC
- Python (or any other language that has a generalized SQL interface)
- You should only use SQL constructs which exist in all the target SQL
  servers or can easily be emulated with other constructs. The crash-me
  pages on http://www.mysql.com can help you with this.
- Write wrappers to provide missing functionality for other OSes / SQL
  servers.
If you need more speed, you should:
- Find the bottleneck (CPU, disk, memory, SQL server, OS, API, or
  application) and concentrate on solving it.
- Use extensions that give you more speed / flexibility.
- Get to know your SQL server so that you can use the fastest possible
  SQL constructs for your problem and avoid bottlenecks.
- Optimize your table layouts and queries.
- Use replication to get more SELECT speed.
- If you have a slow network connection to the database, use the
  compressed client/server protocol.
Don’t be afraid to make the first version of your application not
perfectly portable; when you have solved your problem, you can always
optimize it later.
Optimizing MySQL
- Choose your compiler and compiler options.
- Find the best MySQL startup options for your system.
- Scan the MySQL manual and read Paul DuBois' MySQL book.
- Use EXPLAIN SELECT, SHOW VARIABLES, SHOW STATUS, and SHOW PROCESSLIST.
- Learn how the query optimizer works.
- Optimize your table formats.
- Maintain your tables (myisamchk, CHECK TABLE, OPTIMIZE TABLE).
- Use MySQL extensions to get things done faster.
- Write a MySQL UDF if you notice that you would need some function in
  many places.
- Don't use GRANT at the table or column level if you don't really need
  it.
- Pay for MySQL support and get help to solve your problem 🙂
Compiling and installing MySQL
- By choosing the best possible compiler for your system, you can usually
  get 10-30% better performance.
- On Linux/Intel, compile MySQL with pgcc (the Pentium-optimized version
  of gcc). The binary will only work with Intel Pentium CPUs, however.
- Use the optimization options recommended in the MySQL manual for your
  particular platform.
- Normally a native compiler for a specific CPU (like Sun Workshop for
  SPARC) should give better performance than gcc, but this is not always
  the case.
- Compile MySQL with only the character sets you are going to use.
- Compile the mysqld executable statically
  (with --with-mysqld-ldflags=-all-static) and strip the final executable
  with strip sql/mysqld.
- Note that as MySQL doesn't use C++ exceptions, compiling MySQL without
  exception support gives a big performance win!
- Use native threads (instead of the included MIT-pthreads) if your OS
  supports them.
- Test the resulting binary with the MySQL benchmark test.
Maintenance
- If possible, run OPTIMIZE TABLE once in a while. This is especially
  important for variable-size rows that are updated a lot.
- Update the key distribution statistics in your tables once in a while
  with myisamchk -a; remember to take down MySQL before doing this!
- If you get fragmented files, it may be worth it to copy all files to
  another disk, clear the old disk, and copy the files back.
- If you have problems, check your tables with myisamchk or CHECK TABLE.
- Monitor MySQL status with:
  mysqladmin -i10 processlist extended-status
- With the MySQL GUI client you can monitor the process list and the
  status in different windows.
- Use mysqladmin debug to get information about locks and performance.
Optimizing SQL
Use SQL for the things it’s good at, and do other things in your application.
Use the SQL server to:
- Find rows based on a WHERE clause.
- JOIN tables.
- GROUP BY
- ORDER BY
- DISTINCT

Don't use an SQL server:
- To validate data (like dates).
- As a calculator.
Tips
- Use keys wisely.
- Keys are good for searches, but bad for inserts / updates of key
  columns.
- Keep your data in third normal form, but don't be afraid of duplicating
  information or creating summary tables if you need more speed.
- Instead of doing a lot of GROUP BYs on a big table, create summary
  tables of the big table and query those instead.
- UPDATE table SET count=count+1 WHERE key_column=constant is very fast!
- For log tables, it's probably better to generate summary tables from
  them once in a while than to try to keep the summary tables live.
- Take advantage of default values on INSERT.
Speed difference between different SQL servers (times in seconds)
Reading 2,000,000 rows by key:

| Server        | NT     | Linux |
|---------------|--------|-------|
| mysql         | 367    | 249   |
| mysql_odbc    | 464    |       |
| db2_odbc      | 1206   |       |
| informix_odbc | 121126 |       |
| ms-sql_odbc   | 1634   |       |
| oracle_odbc   | 20800  |       |
| solid_odbc    | 877    |       |
| sybase_odbc   | 17614  |       |

Inserting 350,768 rows:

| Server        | NT    | Linux |
|---------------|-------|-------|
| mysql         | 381   | 206   |
| mysql_odbc    | 619   |       |
| db2_odbc      | 3460  |       |
| informix_odbc | 2692  |       |
| ms-sql_odbc   | 4012  |       |
| oracle_odbc   | 11291 |       |
| solid_odbc    | 1801  |       |
| sybase_odbc   | 4802  |       |
In the above test, MySQL was run with an 8M cache; the other databases
were run with their installation defaults.
Important MySQL startup options
| Variable                | Purpose                                                      |
|-------------------------|--------------------------------------------------------------|
| back_log                | Change if you do a lot of new connections.                   |
| thread_cache_size       | Change if you do a lot of new connections.                   |
| key_buffer_size         | Pool for index pages; can be made very big.                  |
| bdb_cache_size          | Record and key cache used by BDB tables.                     |
| table_cache             | Change if you have many tables or simultaneous connections.  |
| delay_key_write         | Set if you need to buffer all key writes.                    |
| log_slow_queries        | Find queries that take a lot of time.                        |
| max_heap_table_size     | Used with GROUP BY.                                          |
| sort_buffer             | Used with ORDER BY and GROUP BY.                             |
| myisam_sort_buffer_size | Used with REPAIR TABLE.                                      |
| join_buffer_size        | Used when doing a join without keys.                         |
Optimizing tables
- MySQL has a rich set of types. You should try to use the most efficient
  type for each column.
- The ANALYSE procedure can help you find the optimal types for a table:
  SELECT * FROM table_name PROCEDURE ANALYSE()
- Use NOT NULL for columns which will not store null values. This is
  particularly important for columns which you index.
- Change your ISAM tables to MyISAM.
- If possible, create your tables with a fixed table format.
- Don't create indexes you are not going to use.
- Use the fact that MySQL can search on a prefix of an index: if you have
  an INDEX (a,b), you don't need an index on (a).
- Instead of creating an index on a long CHAR/VARCHAR column, index just
  a prefix of the column to save space:
  CREATE TABLE table_name (hostname CHAR(255) NOT NULL, INDEX(hostname(10)))
- Use the most efficient table type for each table.
- Columns with identical information in different tables should be
  declared identically and have identical names.
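The prefix-index tip above raises the question of how long a prefix to pick. Below is a hypothetical helper (not part of MySQL) that measures how much selectivity a prefix keeps; on the server itself you would compare COUNT(DISTINCT col) with COUNT(DISTINCT LEFT(col, n)).

```python
def prefix_selectivity(values, length):
    """Fraction of the column's distinct values that remain
    distinguishable when only the first `length` characters are indexed.
    1.0 means the prefix is as selective as the full column."""
    return len({v[:length] for v in values}) / len(set(values))

hosts = ["www.mysql.com", "www.mysql.org", "lists.mysql.com", "ftp.funet.fi"]
print(prefix_selectivity(hosts, 4))   # 0.75 - 'www.' collapses two hosts
print(prefix_selectivity(hosts, 12))  # 1.0  - 12 chars keep all distinct
```

Pick the shortest prefix whose selectivity is close to 1.0 on a sample of real data.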
How MySQL stores data
- Databases are stored as directories.
- Tables are stored as files.
- Columns are stored in the files in dynamic-length or fixed-size format.
  In BDB tables the data is stored in pages.
- Memory-based tables are supported.
- Databases and tables can be symbolically linked from different disks.
- On Windows, MySQL supports internal symbolic links to databases via
  .sym files.
MySQL table types
- HEAP tables: fixed-row-size tables that are stored only in memory and
  indexed with a HASH index.
- ISAM tables: the old B-tree table format from MySQL 3.22.
- MyISAM tables: a new version of the ISAM tables with a lot of
  extensions:
  - Binary portability.
  - Indexes on NULL columns.
  - Less fragmentation for dynamic-size rows than ISAM tables.
  - Support for big files.
  - Better index compression.
  - Better key statistics.
  - Better and faster auto_increment handling.
- Berkeley DB (BDB) tables from Sleepycat: transaction-safe (with
  BEGIN WORK / COMMIT | ROLLBACK).
MySQL row types (only relevant for ISAM/MyISAM tables)
- MySQL will create a table in fixed-size format if all columns are of
  fixed length (no VARCHAR, BLOB, or TEXT columns). If not, the table is
  created in dynamic-size format.
- Fixed-size format is much faster and safer than the dynamic format.
- The dynamic-size row format normally takes up less space, but may
  become fragmented over time if the table is updated a lot.
- In some cases it's worth moving all VARCHAR, BLOB, and TEXT columns to
  another table just to get more speed on the main table.
- With myisampack (pack_isam for ISAM) one can create a read-only, packed
  table. This minimizes disk usage, which is very nice when using slow
  disks. Packed tables are perfect for log tables which you will not
  update anymore.
MySQL caches (shared between all threads, allocated once)
- Key cache: key_buffer_size, default 8M.
- Table cache: table_cache, default 64.
- Thread cache: thread_cache_size, default 0.
- Hostname cache: changeable at compile time, default 128.
- Memory-mapped tables: currently only used for compressed tables.
Note that MySQL doesn’t have a row cache, but lets the OS handle this!
MySQL buffer variables (not shared, allocated on demand)
- sort_buffer: ORDER BY / GROUP BY.
- record_buffer: scanning tables.
- join_buffer_size: joining without keys.
- myisam_sort_buffer_size: REPAIR TABLE.
- net_buffer_length: for reading the SQL statement and buffering the
  result.
- tmp_table_size: HEAP-table size for temporary results.
How the MySQL table cache works
- Each open instance of a MyISAM table uses an index file and a data
  file. If a table is used by two threads, or used twice in the same
  query, MyISAM will share the index file but will open another instance
  of the data file.
- The cache will temporarily grow larger than the table cache size if all
  tables in the cache are in use. If this happens, the next table that is
  released will be closed.
- You can check whether your table cache is too small by checking the
  mysqld variable Opened_tables. If this value is high, you should
  increase your table cache!
MySQL extensions / optimization that gives you speed
- Use the optimal table type (HEAP, MyISAM, or BDB tables).
- Use optimal column types for your data.
- Use fixed row size if possible.
- Use the different lock types (SELECT HIGH_PRIORITY,
  INSERT LOW_PRIORITY).
- Auto_increment.
- REPLACE (REPLACE INTO table_name VALUES (...)).
- INSERT DELAYED.
- LOAD DATA INFILE / LOAD_FILE().
- Use multi-row INSERT to insert many rows at a time.
- SELECT INTO OUTFILE.
- LEFT JOIN, STRAIGHT JOIN.
- LEFT JOIN combined with IS NULL.
- ORDER BY can use keys in some cases.
- If you only query columns that are in one index, only the index tree
  will be used to resolve the query.
- Joins are normally faster than subselects (this is true for most SQL
  servers).
- LIMIT:
  SELECT * FROM table1 WHERE a > 10 LIMIT 10,20
  DELETE FROM table1 WHERE a > 10 LIMIT 10
- foo IN (list of constants) is very optimized.
- GET_LOCK() / RELEASE_LOCK().
- LOCK TABLES.
- INSERT and SELECT can run concurrently.
- UDF functions that can be loaded into a running server.
- Compressed read-only tables.
- CREATE TEMPORARY TABLE.
- CREATE TABLE ... SELECT.
- MyISAM tables with the RAID option, to split a file over many files to
  get past the 2G limit on some file systems.
- Delayed_keys.
- Replication.
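The multi-row INSERT extension listed above saves one round trip and one statement parse per row. A minimal sketch of building such a statement follows; the helper name is made up, and the repr()-based quoting is for illustration only (a real application should use its driver's escaping or placeholders).

```python
def multirow_insert(table, rows):
    """Build a single multi-row INSERT statement from a list of tuples.

    Illustrative only: values are formatted with repr(), which is NOT
    safe escaping for a real server - use driver placeholders instead.
    """
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} VALUES {values}"

print(multirow_insert("t1", [(1, "a"), (2, "b")]))
# INSERT INTO t1 VALUES (1, 'a'), (2, 'b')
```

One statement carrying many rows lets the server batch the key-tree updates instead of paying per-row overhead.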
When MySQL uses indexes
- Using >, >=, =, <, <=, IS NULL, and BETWEEN on a key:
  SELECT * FROM table_name WHERE key_part1=1 AND key_part2 > 5;
  SELECT * FROM table_name WHERE key_part1 IS NULL;
- When you use a LIKE that doesn't start with a wildcard:
  SELECT * FROM table_name WHERE key_part1 LIKE 'jani%'
- Retrieving rows from other tables when performing joins:
  SELECT * FROM t1,t2 WHERE t1.col=t2.key_part
- Finding the MAX() or MIN() value for a specific index:
  SELECT MIN(key_part2),MAX(key_part2) FROM table_name WHERE key_part1=10
- ORDER BY or GROUP BY on a prefix of a key:
  SELECT * FROM foo ORDER BY key_part1,key_part2,key_part3
- When all columns used in the query are part of one key:
  SELECT key_part3 FROM table_name WHERE key_part1=1
When MySQL doesn’t use an index
- Indexes are NOT used if MySQL can calculate that it will probably be
  faster to scan the whole table. For example, if key_part1 is evenly
  distributed between 1 and 100, it's not good to use an index in the
  following query:
  SELECT * FROM table_name WHERE key_part1 > 1 AND key_part1 < 90
- If you are using HEAP tables and you don't search on all key parts
  with =.
- When you use ORDER BY on a HEAP table.
- If you are not using the first key part:
  SELECT * FROM table_name WHERE key_part2=1
- If you are using a LIKE that starts with a wildcard:
  SELECT * FROM table_name WHERE key_part1 LIKE '%jani%'
- When you search on one index and do an ORDER BY on another:
  SELECT * FROM table_name WHERE key_part1 = # ORDER BY key2
Learn to use EXPLAIN
Use EXPLAIN on every query that you think is too slow!

    mysql> explain select t3.DateOfAction, t1.TransactionID
        -> from t1 join t2 join t3
        -> where t2.ID = t1.TransactionID and t3.ID = t2.GroupID
        -> order by t3.DateOfAction, t1.TransactionID;
    +-------+--------+---------------+---------+---------+------------------+------+---------------------------------+
    | table | type   | possible_keys | key     | key_len | ref              | rows | Extra                           |
    +-------+--------+---------------+---------+---------+------------------+------+---------------------------------+
    | t1    | ALL    | NULL          | NULL    | NULL    | NULL             |   11 | Using temporary; Using filesort |
    | t2    | ref    | ID            | ID      |       4 | t1.TransactionID |   13 |                                 |
    | t3    | eq_ref | PRIMARY       | PRIMARY |       4 | t2.GroupID       |    1 |                                 |
    +-------+--------+---------------+---------+---------+------------------+------+---------------------------------+

Types ALL and range signal a potential problem.
Learn to use SHOW PROCESSLIST
Use SHOW PROCESSLIST to find out what is going on:

    +----+-------+-----------+----+---------+------+--------------+-------------------------------------+
    | Id | User  | Host      | db | Command | Time | State        | Info                                |
    +----+-------+-----------+----+---------+------+--------------+-------------------------------------+
    |  6 | monty | localhost | bp | Query   |   15 | Sending data | select * from station,station as s1 |
    |  8 | monty | localhost |    | Query   |    0 |              | show processlist                    |
    +----+-------+-----------+----+---------+------+--------------+-------------------------------------+

Use KILL in mysql or mysqladmin to kill off runaway threads.
How to find out how MySQL solves a query
Run the following commands and try to understand the output:
SHOW VARIABLES;
SHOW COLUMNS FROM ...\G
EXPLAIN SELECT ...\G
FLUSH STATUS;
SELECT ...;
SHOW STATUS;
MySQL is extremely good
- For logging.
- When you do many connects; connect is very fast.
- When you use SELECT and INSERT at the same time.
- When you don't combine updates with selects that take a long time.
- When most selects/updates are using unique keys.
- When you use many tables without long conflicting locks.
- When you have big tables (MySQL uses a very compact table format).
Things to avoid with MySQL
- Updates to a table, or INSERTs into a table with deleted rows, combined
  with SELECTs that take a long time.
- HAVING on things you could have in a WHERE clause.
- JOINs without keys, or on keys which are not unique enough.
- JOINs on columns that have different column types.
- Using HEAP tables when not using a full key match with =.
- Forgetting a WHERE clause with UPDATE or DELETE in the MySQL monitor.
  If you tend to do this, use the --i-am-a-dummy option to the mysql
  client.
Different locks in MySQL
- Internal table locks.
- LOCK TABLES (works on all table types).
- GET_LOCK() / RELEASE_LOCK().
- Page locks (for BDB tables).
- ALTER TABLE also does a table lock on BDB tables.
- LOCK TABLES gives you multiple readers on a table, or one writer.
- Normally a WRITE lock has higher priority than a READ lock, to avoid
  starving the writers. For writers that are not important, one can use
  the LOW_PRIORITY keyword to make the lock handler prefer readers:
  UPDATE LOW_PRIORITY table_name SET value=10 WHERE id=10;
Tricks to give MySQL more information to solve things better
Note that you can always comment out a MySQL-specific feature to keep the
query portable:

    SELECT /*! SQL_BUFFER_RESULTS */ ...

- SELECT SQL_BUFFER_RESULTS ...
  Forces MySQL to make a temporary result set. As soon as the temporary
  set is done, all locks on the tables are released. This can help when
  you have a problem with table locks, or when it takes a long time to
  transfer the result to the client.
- SELECT SQL_SMALL_RESULT ... GROUP BY ...
  Tells the optimizer that the result set will contain only a few rows.
- SELECT SQL_BIG_RESULT ... GROUP BY ...
  Tells the optimizer that the result set will contain many rows.
- SELECT STRAIGHT_JOIN ...
  Forces the optimizer to join the tables in the order in which they are
  listed in the FROM clause.
- SELECT ... FROM table_name [USE INDEX (index_list) | IGNORE INDEX (index_list)] table_name2
  Forces MySQL to use / ignore the listed indexes.
Example of doing transactions
- How to do a transaction with MyISAM tables:

      mysql> LOCK TABLES trans READ, customer WRITE;
      mysql> select sum(value) from trans where customer_id=some_id;
      mysql> update customer set total_value=sum_from_previous_statement where customer_id=some_id;
      mysql> UNLOCK TABLES;

- How to do a transaction with Berkeley DB tables:

      mysql> BEGIN WORK;
      mysql> select sum(value) from trans where customer_id=some_id;
      mysql> update customer set total_value=sum_from_previous_statement where customer_id=some_id;
      mysql> COMMIT;

- Note that you can often avoid transactions altogether by doing:

      UPDATE customer SET value=value+new_value WHERE customer_id=some_id;
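The single relative UPDATE above is atomic on its own, which is why it needs no surrounding transaction. Here is a runnable sketch; it uses Python's bundled sqlite3 module purely so the example executes, but the SQL pattern is the same against MySQL.

```python
import sqlite3

# In-memory database standing in for the MySQL server in this sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY,"
            " value INTEGER)")
con.execute("INSERT INTO customer VALUES (1, 100)")

# A read-modify-write sequence would need a lock or transaction; a single
# relative UPDATE is atomic by itself:
con.execute("UPDATE customer SET value = value + ? WHERE customer_id = ?",
            (25, 1))

print(con.execute("SELECT value FROM customer"
                  " WHERE customer_id = 1").fetchone()[0])  # 125
```

Because the server applies the increment itself, two concurrent clients can both run this statement without losing an update.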
Example of using REPLACE
REPLACE works exactly like INSERT, except that if an old record in the
table has the same value as the new record on a unique index, the old
record is deleted before the new record is inserted.

Instead of:

    SELECT 1 FROM t1 WHERE key=#
    IF found-row
      LOCK TABLES t1
      DELETE FROM t1 WHERE key1=#
      INSERT INTO t1 VALUES (...)
      UNLOCK TABLES t1;
    ENDIF

do:

    REPLACE INTO t1 VALUES (...)
General tips
- Use short primary keys. Use numbers, not strings, when joining tables.
- When using multi-part keys, the first part should be the most-used key.
- When in doubt, put columns with more duplicates first to get better key
  compression.
- If you run the client and the MySQL server on the same machine, use
  sockets instead of TCP/IP when connecting to MySQL (this can give you
  up to a 7.5% improvement). You can do this by specifying no hostname,
  or the hostname localhost, when connecting to the MySQL server.
- Use --skip-locking (the default on some OSes) if possible. This turns
  off external locking and gives better performance.
- Use application-level hashed values instead of long composite keys:
  SELECT * FROM table_name
  WHERE hash=MD5(concat(col1,col2)) AND col_1='constant' AND col_2='constant'
- Store BLOBs that you need to access as files, in files; store only the
  file name in the database.
- It is faster to remove all rows than to remove a large part of the
  rows.
- If SQL is not fast enough, take a look at the lower-level interfaces to
  access the data.
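The hashed-lookup tip above moves the hashing into the application. This sketch (the function name is illustrative) computes the same value as MySQL's MD5(CONCAT(col1, col2)) for string columns, so the application can pass the hash as a constant in the WHERE clause and hit a short hash index instead of a long composite one.

```python
import hashlib

def lookup_hash(*cols):
    """Same value as MySQL's MD5(CONCAT(...)) over string columns:
    concatenate, then take the hex MD5 digest."""
    return hashlib.md5("".join(cols).encode("utf-8")).hexdigest()

# The application computes the short hash, then queries something like:
#   SELECT * FROM table_name WHERE hash=? AND col_1=? AND col_2=?
print(lookup_hash("constant1", "constant2"))
```

The extra equality tests on the original columns guard against the rare hash collision.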
Benefits of using MySQL 3.23
- MyISAM: portable, big-file table format.
- HEAP: in-memory tables.
- Berkeley DB: transactional tables from Sleepycat.
- A lot of raised limits.
- Dynamic character sets.
- More STATUS variables.
- CHECK and REPAIR TABLE.
- Faster GROUP BY and DISTINCT.
- LEFT JOIN ... IS NULL optimization.
- CREATE TABLE ... SELECT.
- CREATE TEMPORARY TABLE table_name (...).
- Automatic conversion of temporary HEAP tables to MyISAM tables.
- Replication.
- The mysqlhotcopy script.
Important features that we are actively working on
- Improving transactions.
- Fail-safe replication.
- Text searching.
- Delete with many tables (updates with many tables will be done after
  this).
- Better key cache.
- Atomic RENAME (RENAME TABLE foo TO foo_old, foo_new TO foo).
- A query cache.
- MERGE tables.
- A better GUI client.
Author: Tim Smith