Maintaining acceptable page load times (even during peak traffic) is a key component in converting users to customers. Given that a site generally has roughly 8 seconds to capture a user's attention before the initial impulse to abandon, administrators must focus relentlessly on delivering content as quickly as possible. Preparing for traffic spikes is perhaps just as important, as site behavior can change drastically under load. Herein we've included practices, learned across many engagements, which in combination can significantly improve the performance and scalability of a Magento store.

Hardware profile

Magento was designed with bare metal in mind. That is to say that while the application will run on a thin cloud account (for staging purposes), it will quickly collapse under a highly trafficked production site due to its demands on memory and processing power. For that reason, we like to recommend (as a minimum ideal production setup) a 2+2 configuration:

  • Box #1 serves the Magento frontend (behind a load balancer)
  • Box #2 serves the Magento backend and acts as a frontend failover
  • Box #3 is a read/write database for the Magento frontend
  • Box #4 is a cloned read/write database for the Magento backend, and acts as a failover for the frontend database.

Generally, the reasoning behind this architecture is twofold: by putting failover in place, we can accommodate the inevitable system outage without cascading into an application failure. The separation between frontend and backend is driven by the observation that many of the bulk updates and reporting features available to administrative users are extremely resource-intensive, and can significantly impact customer-facing performance.

Finally, we have found that leveraging SSDs helps to reduce read times, which account for the majority of customer-facing lag.

Webserver profile

Typically, Magento runs on Apache with a series of overrides in place in the “.htaccess” file. This is a normal setup for a web application, and as such is familiar to a general audience of LAMP stack developers, but we have found that in a production environment there are some improvements to be made.

  • If the sysadmin team is sufficiently comfortable with moving away from a standard LAMP setup, swap Apache for NGINX, which is a lightweight web server that conveys a significant performance advantage.
  • Alternately, move the directives out of .htaccess and into the main Apache configuration (httpd.conf) so that they need not be processed on the fly. This can achieve most of the gains you might see with NGINX, without introducing unfamiliar technology.
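As a sketch of the second option, the Magento rewrite rules normally found in .htaccess can live in a <Directory> block instead, with overrides switched off entirely (the docroot path here is illustrative):

```apache
# httpd.conf -- move Magento's .htaccess directives into the server config
# so Apache no longer scans for .htaccess files on every request.
<Directory "/var/www/magento">
    # Disables per-request .htaccess lookups entirely
    AllowOverride None

    RewriteEngine on
    # Magento's standard front-controller rewrites, copied from .htaccess
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule .* index.php [L]
</Directory>
```

Note that with AllowOverride None, every directive from the stock .htaccess (not just the rewrites shown here) must be carried over, or it silently stops applying.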

Webserver config

Beyond the initial selection of server technology, there are a number of optimizations to be made to tune the server to Magento's resource usage profile. Specifically, Magento uses a great deal of memory, and its deep file structure can create lookup-related lag. To compensate, we recommend a few settings:

  • Raise php_value memory_limit to 512M
  • Disable open_basedir (too intensive with a deep file structure)
  • Enable the realpath cache; set realpath_cache_size = 2M
  • Disable ETags (media expiry trumps validation)
  • Enable HTTP keepalive (NGINX's 75-second default is good)
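The PHP side of those recommendations translates to a php.ini fragment along these lines (the realpath_cache_ttl value is our addition; tune per environment):

```ini
; php.ini -- Magento-oriented settings from the list above
memory_limit = 512M          ; Magento's admin and reindex paths are memory-hungry
open_basedir =               ; disabled: per-request path checks are costly here
realpath_cache_size = 2M     ; cache resolved paths for Magento's deep file tree
realpath_cache_ttl = 7200    ; optional: keep entries longer on stable deploys
```

The ETag and keepalive items live in the webserver configuration rather than php.ini (e.g. FileETag None in Apache, or etag off and keepalive_timeout in NGINX).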

In a load balanced environment, the most important concern regarding sessions is to ensure that users maintain state even when they move between physical machines. This can be accomplished by storing sessions in MySQL or a centralized Memcache server, but we’ve found that the most performant setup leverages Redis. We recommend setting up a Redis instance on the primary database server to handle the session key/value lookup.
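A minimal sketch of that setup in app/etc/local.xml, assuming the widely used Cm_RedisSession module (the hostname is illustrative):

```xml
<!-- app/etc/local.xml -- session storage via the Cm_RedisSession module -->
<config>
    <global>
        <session_save>db</session_save>  <!-- Cm_RedisSession takes over the db handler -->
        <redis_session>
            <host>db1.internal</host>    <!-- illustrative: the primary DB server -->
            <port>6379</port>
            <db>0</db>
            <timeout>2.5</timeout>
            <compression_threshold>2048</compression_threshold>
        </redis_session>
    </global>
</config>
```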

Full page caching

Full page caching is a distinguishing characteristic of Magento EE vs CE, and it speeds up load times considerably. Essentially, for a period of time, page content is drawn from memory rather than rebuilt from the database. The default configuration stores the cache in the filesystem, but this is not as performant as a key/value lookup, and it loses efficacy in a load balanced environment (implying one cache per webhead). We like implementing FPC, and then augmenting with a reverse proxy, time allowing:

  • Leverage a centralized Redis instance for FPC storage
  • Implement a Varnish reverse proxy, caching the page sections that are generally static and hole-punching those that are user-specific (such as the cart)
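The Redis half of that recommendation might look like the following local.xml fragment, assuming the Cm_Cache_Backend_Redis backend (the hostname is illustrative):

```xml
<!-- app/etc/local.xml -- default cache backend on a centralized Redis instance -->
<config>
    <global>
        <cache>
            <backend>Cm_Cache_Backend_Redis</backend>
            <backend_options>
                <server>db1.internal</server>  <!-- illustrative hostname -->
                <port>6379</port>
                <database>1</database>         <!-- keep separate from the session db -->
                <compress_data>1</compress_data>
            </backend_options>
        </cache>
    </global>
</config>
```

On EE, a parallel <full_page_cache> node accepts the same backend and backend_options structure for FPC storage.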

Byte code compilation enables the web server to convert human-readable PHP into machine language before it is called up for execution, saving that repeated step on each page load. Opcode caches such as APC fill this role whether PHP runs under Apache (mod_php) or under NGINX (via PHP-FPM). Additionally, Magento offers a compilation "setting" which copies executable code into a single directory to save file-traversal costs. Although altogether distinct functions, both are good practices and can bear fruit.
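Magento's own compilation feature is driven from the shell, run from the Magento root:

```shell
# Enable Magento 1's compilation feature (copies classes into includes/src)
php -f shell/compiler.php -- compile

# Check the current status; disable before deploying code or installing extensions,
# then recompile afterward
php -f shell/compiler.php -- state
```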

Compression

Both media and text elements can be compressed into lighter-weight packets before leaving the server. While text element compression is algorithmic, media compression generally requires a human eye to ensure that the media does not degrade to the point that it no longer conveys the marketing message. The techniques that we recommend, specifically, include:

  • Image optimization via GD2 parameters
  • Text compression via PHP's zlib.output_compression, or GZIP in NGINX
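The NGINX variant of text compression is a handful of directives (the comp level and size threshold are starting points, not gospel):

```nginx
# nginx.conf -- compress Magento's text responses before they leave the server
gzip on;
gzip_comp_level 5;            # balance CPU cost against transfer size
gzip_min_length 1024;         # skip tiny responses where gzip overhead dominates
gzip_types text/css application/javascript application/json text/plain application/xml;
gzip_vary on;                 # emit Vary: Accept-Encoding for downstream caches/CDNs
```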

Static file handling

Those files that contain no server-side executable code (like CSS and JS) may be combined and stripped to minimize their own load times. Specifically, both CSS and JS files may be merged via a Magento system setting. The advantage here is that Magento can then circumvent the browser-side limit on concurrent downloads from a single host. Minification (whitespace stripping) can be achieved through standard PHP libraries (like JSMin).

MySQL config

Likewise, at the database level there are a few optimizations that can help improve overall performance, including caching and regular maintenance. We do not, however, recommend separating the read/write nodes in local.xml, as we've seen network lag result in non-transactional calls (i.e. an update on an object that doesn't exist yet). We have seen good results with:

  • Optimizing the query_cache_size (roughly 64M is a good starting point)
  • Running a regular maintenance script to clear out the logs
  • Running MySQL OPTIMIZE TABLE regularly, to reorganize physical storage after bulk inserts.
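The cache sizing lives in my.cnf (query_cache_size = 64M). The maintenance pass might be sketched as SQL along these lines; the table names are the Magento 1 defaults, so verify them against any table prefix in use:

```sql
-- Clear Magento's visitor/log tables, which grow without bound by default
TRUNCATE TABLE log_customer;
TRUNCATE TABLE log_visitor;
TRUNCATE TABLE log_visitor_info;
TRUNCATE TABLE log_url;
TRUNCATE TABLE log_url_info;
TRUNCATE TABLE log_quote;

-- Reorganize physical storage after bulk inserts (repeat for other hot tables)
OPTIMIZE TABLE catalog_product_entity;
```

Magento also ships a log cleaner that covers the first half of this (php -f shell/log.php -- clean), which is convenient to cron.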

Content distribution networks

Content distribution networks host static media on behalf of the primary servers, thereby alleviating the load on Apache/NGINX and allowing media to load more quickly. Specifically, CDNs serve static content from the location closest to the end consumer, reducing network lag. Some lessons we've learned when implementing CDNs include:

  • Choose a CDN that fails over to the origin, to handle dynamically generated thumbnails
  • Script invalidation routines, to expire media that is no longer relevant
  • Set media headers to future expiry, to reduce unnecessary reloads
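The expiry headers on the origin side reduce to a short NGINX location block (the file-extension list is illustrative); note that far-future expiry assumes filenames change when content does:

```nginx
# nginx -- far-future expiry for static media served via the CDN
location ~* \.(css|js|gif|jpe?g|png|ico|woff2?)$ {
    expires 365d;                       # far-future expiry; rename files to bust
    add_header Cache-Control "public";  # allow intermediary caches to store it
    etag off;                           # rely on expiry rather than revalidation
}
```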

Magento config

There are a few Magento setting values that site administrators should consider when publishing to a production environment:

  • Enable flat catalog for categories and products to circumvent Magento’s EAV data model
  • Limit the maximum number of products on list pages to a reasonable amount (like 30), or lazy-load further products
  • Avoid (ioncube) encrypted extensions

Template optimization

The Magento template layer is essentially markup with some limited embedded logic, and as such should follow general web design best practices (ensuring W3C compliance). In designing for mobile devices (see separate document), consider serving images in proportion to screen size.

Scaling to load

Performance under load can vary significantly as compared with performance at low traffic. Being ready for a windfall event starts with knowing your goals: how many concurrent users do you expect to support, and what page load times would you expect with that kind of traffic? Once you have established a baseline of expectations:

  • Test with siege (easy) or JMeter (harder) to emulate high concurrent traffic
  • Set up automated resource monitoring to trigger burst events based on thresholds (e.g. memory allocation)
  • Leverage a cloud service (like AWS) to provision additional web heads behind an established load balancer
  • Draw directly from a designated repository and mount centralized media
  • Dial down resources when load normalizes
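For the first step, a siege run against a staging copy might look like this (the host and URL list are illustrative):

```shell
# Emulate 200 concurrent users for 5 minutes across representative pages,
# with up to 1 second of random think time between requests
siege -c 200 -t 5M -d 1 -f urls.txt

# urls.txt contains one URL per line, e.g.:
#   https://staging.example.com/
#   https://staging.example.com/some-category.html
```

Start well below the target concurrency and ramp up, watching where response times and error rates begin to degrade rather than only the final numbers.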