Recommended Hardware Specs for Production Servers

Last Updated: 25 Dec 2005


Here are some recommended hardware configs for Web servers,
SQL (or other database) servers, email, terminal services,
and other application-type servers.

In general, I prefer rack-mountable systems...

These configs were designed around Microsoft software and
(pre-HP) Compaq hardware, but for the most part, there
should be no problem substituting another vendor's
equipment.  There should be little difficulty in deriving
specs for additional server roles:


NOTE: These specs were originally created using 9GB,
      18GB and 36GB drives.  They have recently been
      modified to take into account 73GB drives.  Using
      other drive sizes, particularly with SATA drives,
      will allow for different configurations, possibly
      eliminating the need for systems with more than 8
      drive bays, or external chassis.

      These system specs reflect recommendations for a
      standard enterprise environment of 2500-5000 nodes,
      with general productivity applications.  Adjust
      accordingly for your environment (larger or smaller).

      Whenever possible, get the fastest machines that
      are available for development, but make sure your
      developers are not making performance assumptions
      that will not bear out in the real world.
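The note above about drive sizes can be made concrete with a minimal sketch. The 280GB usable-capacity target is a hypothetical example, and the RAID5 arithmetic below ignores hot spares and formatting overhead:

```python
import math

def raid5_drives_needed(usable_gb, drive_gb):
    """RAID5 yields (n - 1) * drive_gb of usable space, so we need
    enough data-bearing drives plus one drive's worth of parity."""
    return math.ceil(usable_gb / drive_gb) + 1

# The same ~280GB usable array shrinks from 33 bays to 5 as
# drive sizes grow -- which is how 73GB drives can eliminate
# the need for external chassis:
for size in (9, 18, 36, 73):
    print(size, "GB drives:", raid5_drives_needed(280, size))
```

With 36GB drives the array still needs 9 bays; with 73GB drives it fits in 5, comfortably inside an 8-bay chassis.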


On the various mailing lists and forums, there are
fairly frequent debates about the merits and advantages
of building a system yourself, as opposed to going with
Compaq, Dell, HP, IBM, etc.

One man's expense is another man's insurance, and each
person's experience will shape his or her decisions.

I am not opposed to the concept of building machines,
and anyone who has ordered systems from Compaq or HP
will realize that you still have to put the components
together yourself unless you're willing to pay the
integrator to do that (which is generally a waste until
they fully understand your environment).  The real
issue comes down to what you get for your money...

If you are managing 5 servers, in a 9x5 environment,
with a limited budget, then by all means build your
servers and possibly your workstations, as you will
save tons of money, and get better components.

OTOH, if you're managing 100 servers, in a 24x7
environment, with a robust budget, then the benefits
of building your servers are greatly diminished.
This is when it is a great time to outsource the pain
of component procurement to the OEM, in return for
promises of 4-hour turnaround on support and other
similar benefits.  Also, when you make a larger order
with one of the big boys, you are able to get better
discounts than you would as an individual component
shopper.  Generally speaking, the price advantages of
building your own system are smaller than they were 5
or 6 years ago.

Another issue to consider is scalability. You will be
hard pressed to find the parts for (and successfully
build) a cost-effective 8-way system, without going
to one of the aforementioned vendors.

As a compromise, you might want to work with a small,
local Systems Integrator that you've built up a good
relationship with. This will allow you to have more
flexibility in selecting the parts that you want,
while still benefiting from volume pricing discounts
and better support policies.


How does one decide how many CPUs and how much RAM is
needed in any given system? You could break out PERFMON
and find out the specifics for your environment, but once
you've built and deployed a few servers, you'll have
developed a reasonable baseline for performance based on
the various types of server functions you use regularly.
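As a rough illustration of that first-pass triage, here is a minimal sketch. The counter names in the comments are real PERFMON counters, but the threshold values are assumptions for illustration, not tuned numbers:

```python
def suggest_upgrade(cpu_samples, mem_samples,
                    cpu_threshold=80.0, mem_threshold=85.0):
    """First-pass bottleneck triage from sampled utilization
    percentages (e.g. PERFMON's Processor\\% Processor Time and
    Memory\\% Committed Bytes In Use).  Thresholds are illustrative."""
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    avg_mem = sum(mem_samples) / len(mem_samples)
    if avg_cpu >= cpu_threshold and avg_mem >= mem_threshold:
        return "add both"
    if avg_cpu >= cpu_threshold:
        return "add CPUs"
    if avg_mem >= mem_threshold:
        return "add RAM"
    return "current box is adequate"

# A box averaging ~52% CPU but ~92% memory use is RAM-starved:
print(suggest_upgrade([40, 55, 60], [90, 92, 95]))  # add RAM
```

The same sampled data that feeds a sketch like this is what builds the baseline intuition over time.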

Depending on the intended role of the server, it may
benefit more from extra RAM than additional CPUs.  Not
every application that you deploy will be CPU-bound, so
recklessly adding CPUs will likely result in a more
expensive machine with less bang for the buck.

Some server applications, however, simply require more
processing power. What we should do is establish a
guideline for choosing what combinations of components
to put together for the best price/performance ratio.

The following represents some safe generalizations that
we can make with regards to the number of CPUs and the
amount of RAM needed for common applications or services:

• CPU Classifications:
  * 1C: Single CPU
  * 2C: Dual CPUs
  * 4C: Quad CPUs
  * 8C: Massive SMP (Eight-Way Systems and Above)

• RAM Classifications:
  * LR: Low RAM ...............  512MB
  * MR: Moderate RAM .......... 1024MB (1GB)
  * HR: High RAM .............. 2048MB (2GB)
  * XR: Xtra RAM .............. 4096MB (4GB) and beyond

Putting these two categories together results in a
workable baseline:

• Domain Controller (NT4) ..... 1C-LR
• Domain Controller (Win2K) ... 2C-MR/HR
• Domain Controller (Win2K3) .. 2C-MR/HR
• DNS or WINS Server .......... 1C-LR
• File Server ................. 1C-HR
• Groupware Server ............ 2C-HR/XR
• SMTP Mail Server ............ 1C-MR
• Mailing List Server ......... 2C-HR
• Database Server ............. 4C-HR/XR
• DataWarehouse Server ........ 8C-XR
• Terminal/Citrix Server ...... 4C-XR
• Web Server (static HTML) .... 1C-LR/MR (should be load-balanced anyway)
• Web Server (dynamic pages) .. 2C-MR/HR (even with load-balancing)
• Proxy Server ................ 2C-MR/HR
• Firewall .................... 1C-LR to 2C-MR (depending on OS & functionality)
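The baseline above lends itself to a simple lookup table. The sketch below encodes a few of the rows; the role names are my own, and where the table gives a range (e.g. HR/XR), taking the upper bound is a judgment call:

```python
# The document's own shorthand for CPU and RAM classes:
CPU = {"1C": 1, "2C": 2, "4C": 4, "8C": 8}
RAM_MB = {"LR": 512, "MR": 1024, "HR": 2048, "XR": 4096}

# A few rows from the baseline table; ranges resolved to their
# upper bound:
BASELINE = {
    "dns_or_wins":     ("1C", "LR"),
    "file_server":     ("1C", "HR"),
    "smtp_mail":       ("1C", "MR"),
    "database":        ("4C", "XR"),
    "terminal_citrix": ("4C", "XR"),
    "web_static":      ("1C", "MR"),
    "web_dynamic":     ("2C", "HR"),
}

def spec_for(role):
    """Translate a server role into (cpu_count, ram_mb)."""
    cpu_class, ram_class = BASELINE[role]
    return CPU[cpu_class], RAM_MB[ram_class]

print(spec_for("database"))  # (4, 4096)
```

Keeping the classes as symbols (rather than hard numbers) makes it easy to bump a whole class, as the Aug 2004 changelog entry did when the minimum went from 256MB to 512MB.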

Of course, you can exceed these specs (and you may
have to, if you stick too many things on a single box),
but this should be adequate for the vast majority of
environments.

For instance, a 4-CPU Web Server is generally a waste,
regardless of the size of your environment, as you're
doing more file serving than code crunching.  Now, for
an application server (even a Web App Server), the extra
CPUs can come in handy.

NOTE: Be advised that operating systems such as Linux
      and BSD will generally (but not always) require
      less RAM to perform the same functions as their
      Windows counterparts, so you can drop those
      boxes down a notch or two, RAM-wise.

      Opteron-based systems are faster than their Xeon
      counterparts, even in 32-bit mode, while running
      at a much lower GHz.  Intel's EM64T CPUs should
      alleviate this disparity somewhat in 64-bit mode.








• Dec 2005: Increased some RAM recommendations. NT4 is
            pretty much dead at this point (or should be).
            This update includes Front-End and Back-End
            configurations for Exchange 2003.  RAID5 is
            discouraged for high-performance Exchange 2003
            configs.  Also added an Exchange 2003-specific
            PDF attachment.

• Dec 2004: Added Opteron as options to all the specs.
            Changed ATA drives/RAID to SATA drives/RAID.
            Removed min processor speeds from all Dev
            and QA systems, due to the lack of common
            speeds across the different CPU architectures.
            NT4 remains in the list, but I'll get rid
            of it next year.

• Aug 2004: Updated RAM specs from 256MB min to 512MB min

• Aug 2004: This will likely be the last update that
            contains any reference to NT4

• The specs as listed in this document and accompanying
  PDF file assume an average business environment with
  800-1200 users, using general productivity software.

• The use of alternate operating systems might slightly
  decrease the RAM requirements.

• Among x86 vendors, I favor hardware from Compaq/HP
  and IBM, particularly at the high-end.

• The Standard Server Edition of Windows 2003 Server is
  also limited to 4 CPUs, up from a limit of 2 during
  the early beta cycle.

• The Web Server Edition of Windows 2003 Server maxes
  out at 2 CPUs, emphasizing horizontal vs vertical
  scaling for web servers.

• Application servers, OTOH, can easily need 4 CPUs, if
  lots of business logic or bad development techniques
  are involved.

• ATA RAID is highly recommended for development or
  test environments, and for low-budget environments.
  Increasingly, ATA RAID is becoming a serious challenger
  to SCSI for lower-end systems.