Storage and storage performance are critical components of Adobe Experience Manager, yet too many organizations overlook or misunderstand proper storage provisioning, leading to poor end-user experience, degraded authoring and slow application response times.
Enterprises are investing a great deal of money and time in AEM deployments these days, and for good reason: it’s the most powerful, best-of-breed web content management and marketing platform in the world. But with that power comes complexity, and without the right tools and talent in place, companies are likely not getting all the performance the platform offers.
Take storage and provisioning. The storage leveraged for the AEM repository, whether it’s EMC SAN, AWS Elastic Block Store or Azure Premium Storage, is critical to how the AEM application performs, both for end users visiting websites built on AEM and for the content editors building those websites within AEM.
As a senior solutions architect for AEM at Rackspace, I design AEM infrastructure for our customers who use Rackspace Application Services for AEM, and I can’t stress enough the importance of using high performance storage, and setting up the repository correctly when the AEM Application is first deployed.
Regular maintenance of the content repository is extremely important, as AEM can easily experience downtime if checks and balances are not implemented around storage capacity and performance. AEM is heavy on random reads and sequential writes. Typically, your publisher tier will be very read-heavy, while your author is more write-heavy. Author and publish server performance can be limited by storage capacity/IOPS, CPU and memory allocation.
Any change in AEM creates a new, immutable revision of the node graph, so the previous complete state remains available. The repository is also append-only, which makes it easy to recover both content and repository state from fatal errors, but it also means disk growth is a constant concern. A few hundred gigabytes of data created in a couple of days is not uncommon, whether due to bad code, product bugs or legitimate authoring activity, and that can trigger downtime without appropriate maintenance and monitoring.
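The monitoring half of that equation can start as simply as a capacity check on the volume holding the repository. A minimal sketch, assuming a default install path and an illustrative 80% threshold (both are assumptions, not AEM defaults):

```shell
# check_capacity PATH THRESHOLD -- warn when the filesystem holding PATH
# is fuller than THRESHOLD percent.
check_capacity() {
  used=$(df -P "$1" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
  if [ "$used" -ge "$2" ]; then
    echo "WARNING: $1 at ${used}% capacity"
  fi
}

# In production, point this at the segment store mount, e.g.:
# check_capacity /opt/aem/crx-quickstart/repository/segmentstore 80
check_capacity / 80
```

Run from cron every few minutes and feed the warning into your alerting system, so runaway revision growth is caught before the disk fills.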
How we do it
At Rackspace, we have successfully scaled author storage performance to handle hundreds of concurrent authors hitting the same author server from multiple geographic regions. This is a real use case that can and will be experienced by enterprise organizations with content authors on different teams around the world.
We also utilize different storage options based on the chosen deployment technology. For private cloud deployments, we utilize enterprise-grade EMC arrays with Brocade Enterprise Director Class Switching and fabric. Tiering of the storage array can be leveraged to save costs if needed, and we recommend leveraging Fast Cache for higher performance.
Rackspace has out-of-the-box EMC arrays as well as bespoke arrays built to meet specific application performance requirements. Block storage is always preferred over NFS for the segment store, because NFS native locking is incompatible with the repository’s internal locking mechanisms.
For public cloud deployments on AWS, we provision Elastic Block Store (EBS) volumes with 3,000 or more provisioned IOPS to meet the performance requirements of AEM publishers and authors. AWS allows you to choose the number of provisioned IOPS you want at an additional cost. Provisioned IOPS EBS volumes are preferred for the segment store, with General Purpose volumes serving as the local cache for an S3 data store.
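Provisioning such a volume is a one-liner with the AWS CLI; the size, IOPS figure and availability zone below are illustrative, not recommendations:

```shell
# Hypothetical example: a 100 GiB Provisioned IOPS (io1) volume at 3,000 IOPS.
aws ec2 create-volume \
  --volume-type io1 \
  --iops 3000 \
  --size 100 \
  --availability-zone us-east-1a
```

Note that io1 volumes enforce a maximum IOPS-to-size ratio, so the IOPS target partly dictates the minimum volume size.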
On Azure, we leverage premium disks, which allow a maximum number of IOPS based on the size of the disk. This is important because you may only need 20GB for a segment store, yet be required to use a much larger disk in order to achieve the required IOPS. Always separate the segment store and data store.
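For example, a premium disk sized for its IOPS ceiling rather than its capacity can be created with the Azure CLI. The resource group, disk name and size here are placeholder assumptions; check Azure’s current premium disk tiers for the exact IOPS each size provides:

```shell
# Hypothetical example: a 1 TiB Premium SSD, far larger than the segment
# store needs, chosen purely for the IOPS ceiling of that disk tier.
az disk create \
  --resource-group aem-prod-rg \
  --name aem-segmentstore-disk \
  --size-gb 1024 \
  --sku Premium_LRS
```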
Repository separation: node store and binary store
At Rackspace, we separate out the segment store (node store) part of the repository from the data store (binary store/blob store) on all of our customer builds. Segment store and data store should be logically separated for a few reasons.
Performance: In an ideal AEM repository configuration, the segment store fits entirely in memory. In most AEM repositories, the data store is far too large to allow this if the two are combined, therefore we always separate them out.
With the segment store cached entirely in RAM, you will see a massive increase in performance. The physical disk (bare metal), vDisk (VMware), EBS volume (AWS) or Managed Disk (Azure) leveraged for the segment store should also be high-performance RAID 10 storage, ideally SSD/flash-based, with a sufficient number of IOPS available.
When sizing your AEM application servers, make sure to allocate enough RAM for an appropriately sized heap, the operating system itself and, finally, enough excess RAM for the repository segment store (MaxDirectMemorySize). This off-heap area caches the segment store in RAM, allowing reads hundreds of times faster than from disk.
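As a rough sketch, a 32 GB author server might be carved up as follows. The heap size, direct-memory size and jar name are illustrative assumptions for this example, not Adobe-recommended values:

```shell
# ~8 GB heap, ~2-4 GB left for the OS, and the remainder available
# off-heap for caching the segment store.
java -server \
  -Xms8g -Xmx8g \
  -XX:MaxDirectMemorySize=20g \
  -jar aem-quickstart.jar -r author,nosamplecontent
```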
Maintenance: By splitting the segment store and data store, you have more flexible options for repository maintenance. With a combined repository, the only way to reclaim disk space is via a tar compaction (or RevisionGC). This can be a lengthy activity, depending on the number of changes to the repository, and in some AEM versions it must be completed with the AEM instance offline.
This means specific publishers will be out of the load balancer pool and your author will be offline during the maintenance. Separating the data store from the segment store allows you to do an online DatastoreGC, which can be very fast and can be done with the instance still up. In newer AEM versions with repositories under 1TB these maintenance processes — when run regularly (weekly) — can reclaim tens or hundreds of GB in a few seconds or minutes.
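For versions that require the offline path, the cleanup is typically driven by the oak-run tool. A sketch, assuming a default install path; the oak-run jar version must match the Oak version of your AEM instance:

```shell
# Run with the AEM instance stopped. Remove unreferenced checkpoints
# first so compaction can actually reclaim space, then compact.
java -jar oak-run.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
java -jar oak-run.jar compact crx-quickstart/repository/segmentstore
```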
Additionally, having moved to a split FileDataStore model, you will (on AEM 6.4) be able to take advantage of a new “tail compaction” maintenance task, which can be run online with even greater frequency and only compacts data added since the last compaction, keeping your AEM instance lean and performant.
Storage flexibility: Separating out the segment store and data store allows you to place each on its own storage media. Segment stores can be placed on SSD volumes or flash storage tiers for high performance, while your data store can leverage less expensive, lower-performance storage.
If leveraging AWS for your AEM deployment, you can use S3 for the data store, which is very inexpensive and can be shared between publishers and author, reducing storage costs even more. Splitting these, as well as putting logging onto separate volumes, allows simultaneous writes to the different disks, further improving performance and removing potential bottlenecks.
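Wiring up an S3 data store comes down to dropping an OSGi configuration into the install folder before first startup. This sketch assumes the Adobe S3 connector package is installed; the bucket, region, credentials and cache size are all placeholders:

```shell
# Hypothetical: create the S3 data store OSGi config before first start.
cat > crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config <<'EOF'
accessKey="PLACEHOLDER"
secretKey="PLACEHOLDER"
s3Bucket="example-aem-datastore"
s3Region="us-east-1"
cacheSize="17179869184"
EOF
```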
The task of separating out the segment store and data store should be completed upon initial installation, when AEM is first unpacked. If AEM is installed with a combined repository, separating them after the fact requires a re-install or a crx2oak migration (sidegrade). Rackspace leverages our own in-house, custom-built automation to complete our AEM builds, and we default to an external data store along with many other options for very high performance.
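For an on-disk external data store, the install-time configuration is analogous to the S3 case: an OSGi config placed in the install folder before AEM is started for the first time. The mount path and minimum record length below are illustrative assumptions:

```shell
# Hypothetical: configure an external FileDataStore at install time,
# before AEM is started for the first time.
cat > crx-quickstart/install/org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config <<'EOF'
path="/mnt/aem-datastore"
minRecordLength="4096"
EOF
```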
If you would prefer not to worry about whether your storage is up to speed, let Rackspace Application Services take care of managing your application platform for you — we will provision and manage your public or private cloud AEM infrastructure and configure the application platform for very high performance while providing production-grade security and management and optimizing costs.