Multi-Tier Web Application
Building a multi-tier web application with scalability and resiliency!
9/29/2023 · 6 min read
This project, created by Adrian Cantrill, was one of the earliest projects I completed. It exposed me to both the theory and the practical application of AWS' core concepts surrounding the three-tier web application. I want to structure this blog as an outline of the project, while also providing a theoretical understanding of the core concepts underpinning resiliency and scalability.
The project involved configuring WordPress, producing the building blocks for a cloud-based web application. The goal? Developing a scalable and resilient architecture, organised into 'architectural evolutions'.
As with any multi-tier web application, a database (DB) component existed within this architecture. During the First Evolution, WordPress was configured within a single EC2 instance through a series of manual commands, and MariaDB was installed on that same instance. To simplify the scripting process, Parameter Store was used to parameterise key configuration data for our application. This included (but was not limited to) the DB credentials and the DB endpoint, which WordPress needs in order to communicate with MariaDB when handling user requests.
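As a rough illustration of that pattern, here is a minimal boto3 sketch of writing and then reading such parameters. The parameter names and values are illustrative assumptions (the project itself reads them from a shell-based User Data script), not the project's actual code.

```python
import boto3

ssm = boto3.client("ssm")

# Store the DB endpoint and credentials as parameters (names are illustrative).
ssm.put_parameter(
    Name="/wordpress/db-endpoint",
    Value="localhost",  # first evolution: MariaDB lives on the same instance
    Type="String",
    Overwrite=True,
)
ssm.put_parameter(
    Name="/wordpress/db-password",
    Value="example-password",  # placeholder; stored encrypted as a SecureString
    Type="SecureString",
    Overwrite=True,
)

# At boot, the bootstrap script reads these back instead of hard-coding them.
endpoint = ssm.get_parameter(Name="/wordpress/db-endpoint")["Parameter"]["Value"]
password = ssm.get_parameter(
    Name="/wordpress/db-password", WithDecryption=True
)["Parameter"]["Value"]
```

Because the endpoint lives in Parameter Store rather than in the script itself, later evolutions can repoint WordPress at a new database simply by overwriting one parameter.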
Already we can see two issues which would hamper the scalability of this application. Firstly, as both the web and DB tiers are configured within the same EC2, any attempt at scaling the latter would risk data loss. Secondly, as the instance's IP address is hard-coded into WordPress' DB configuration, if the EC2 were to have its IP changed (due to a stop/start) then the web tier would no longer be able to communicate with the DB tier.
These problems helped me understand the importance of logically separating (or compartmentalising) the components within our multi-tier application. Once separated, the different services are able to scale according to demand without affecting their counterparts.
The Second Evolution involved the utilisation of Launch Templates (LTs). LTs are significant in several ways. Firstly, User Data can be embedded in an LT, which significantly reduces the build time for our compute instances. Secondly, although LTs are not editable, they support versioning, enabling the provisioning of new LT versions with updated User Data amongst other configuration changes. Given the time it takes to configure WordPress and MariaDB within our EC2s, this is quite significant.
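To make the versioning behaviour concrete, here is a hedged boto3 sketch. The template name, AMI ID and the heavily abbreviated User Data are placeholders, not the project's actual values.

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# User Data that bootstraps WordPress at launch (heavily abbreviated).
user_data = """#!/bin/bash
dnf install -y httpd mariadb105-server
# ... install and configure WordPress and MariaDB here ...
"""

# Version 1 of the launch template (AMI ID is a placeholder).
ec2.create_launch_template(
    LaunchTemplateName="wordpress-lt",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)

# LTs are immutable, so an updated bootstrap script becomes a new version
# derived from the previous one rather than an in-place edit.
updated_user_data = user_data + "echo 'revised bootstrap steps'\n"
ec2.create_launch_template_version(
    LaunchTemplateName="wordpress-lt",
    SourceVersion="1",
    LaunchTemplateData={
        "UserData": base64.b64encode(updated_user_data.encode()).decode(),
    },
)
```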
At this stage, customers still access the webpage directly on the instance, so health checks can't be performed, and as the content featured within our WordPress page is stored locally, scaling is still a concern. Why? If horizontal scaling were implemented, data consistency would be an issue, as older EC2s may hold outdated data compared to their newly created counterparts. Secondly, if users are redirected to a different EC2 (due to traffic distribution or a crash), session data may be lost. So whilst we have automated the provisioning of our web and DB tiers, we must still solve the issues of storage and scalability.
The Third Evolution sought to tackle the architectural issue of having the web tier and DB tier within the same instance. Taking advantage of RDS' capabilities was the best course of action. To do so, we first provisioned a DB Subnet Group (this involved choosing three AZs and the subnets created via CloudFormation). After provisioning our RDS instance and DB Subnet Group, we exported the contents of our MariaDB over to our MySQL RDS instance, and updated the DB endpoint within the Parameter Store to point from MariaDB over to our RDS DB. Finally, we launched a new version of our LT which removed installing MariaDB as a prerequisite for our instances. By now, we've utilised AWS RDS and created a logical compartmentalisation of our web and DB tiers.
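In boto3 terms, the provisioning and repointing might look roughly like the sketch below. All identifiers, subnet IDs and credentials are placeholders, and the data export itself (e.g. via mysqldump) is omitted.

```python
import boto3

rds = boto3.client("rds")
ssm = boto3.client("ssm")

# DB subnet group spanning subnets in three AZs (IDs are placeholders).
rds.create_db_subnet_group(
    DBSubnetGroupName="wordpress-db-sng",
    DBSubnetGroupDescription="DB subnets across three AZs",
    SubnetIds=["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
)

# A MySQL-compatible RDS instance to replace the local MariaDB.
rds.create_db_instance(
    DBInstanceIdentifier="wordpress-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="example-password",  # placeholder
    DBSubnetGroupName="wordpress-db-sng",
)

# The endpoint only exists once the instance is available.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="wordpress-db")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="wordpress-db"
)["DBInstances"][0]["Endpoint"]["Address"]

# Repoint the parameter at RDS; once the data is migrated, WordPress talks
# to the new DB without any changes to the application itself.
ssm.put_parameter(
    Name="/wordpress/db-endpoint", Value=endpoint, Type="String", Overwrite=True
)
```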
The Fourth Evolution aimed at utilising Amazon EFS to ensure cross-instance replication of storage, neutralising the issue of separate session data within each EC2. Amazon EFS is a shared network file system whose contents can be encrypted at rest. Mount Targets (MTs) are the network endpoints of an EFS file system, made available inside VPC subnets. Our EC2s connect to the MTs, providing access to EFS. As a result, both EFS and our DB now exist as entities separate from our EC2s. Scaling is now possible. Of note, however, is that customers (in order to access WordPress) still have to go through the EC2s. This is where we use abstraction of communication, that is to say, ensuring our users are unable to differentiate between the EC2s they're using to access WordPress.
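A minimal boto3 sketch of that setup, assuming placeholder subnet and security group IDs:

```python
import boto3

efs = boto3.client("efs")

# An encrypted EFS file system to hold the WordPress content shared by
# every instance.
fs = efs.create_file_system(CreationToken="wordpress-content", Encrypted=True)
fs_id = fs["FileSystemId"]

# One mount target per AZ/subnet; instances in that AZ mount the file
# system through these network endpoints. (In practice, wait until the
# file system reports 'available' before creating mount targets.)
for subnet_id in ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```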
The Fifth Evolution involves supplying our multi-tier application with its most important components: the Elastic Load Balancer (ELB) and the Auto Scaling Group (ASG). The ELB distributes traffic across several EC2s through its nodes, which are configured within different AZs to allow for greater traffic distribution. During its provisioning, an ELB is given a DNS name which resolves to its nodes. By adjusting the listener configuration, we can determine which ports and protocols are accepted.
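Here is a hedged boto3 sketch of an Application Load Balancer, a target group and a listener; every name and ID below is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# An internet-facing ALB with a node in each public subnet.
alb = elbv2.create_load_balancer(
    Name="wordpress-alb",
    Subnets=["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# A target group the ASG registers instances into, with a health check.
tg = elbv2.create_target_group(
    Name="wordpress-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/",
)["TargetGroups"][0]

# The listener determines which protocol/port the ELB accepts and where
# it forwards traffic.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# The single entry point customers will use.
print(alb["DNSName"])
```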
WordPress is now configured with the DNS name of the ELB, so if instances are terminated or stopped, their changing IP addresses won't affect the web application itself. To make this change, we created a final version of our LT, ensuring that the User Data script points to the DNS name rather than a hard-coded IP address. Integrated with the ELB is the ASG, which links to the Target Groups (TGs). This ensures that when the ASG terminates an EC2 (due to a failed health check), the EC2 is also deregistered from the TG! Further to this is the addition of a dynamic scaling policy via CloudWatch: when an alarm on a metric is triggered (for example, CPU utilisation > 40%), the ASG can scale the number of instances up or down.
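To illustrate, a sketch of a simple scale-out policy wired to a CloudWatch alarm. The ASG name is hypothetical (the group itself, attached to the target group above, is assumed to already exist), and a mirrored scale-in policy and alarm would normally accompany it.

```python
import boto3

asg = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scale-out policy: add one instance when the alarm below fires.
policy = asg.put_scaling_policy(
    AutoScalingGroupName="wordpress-asg",
    PolicyName="scale-out",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

# Alarm on average CPU across the ASG; breaching 40% for two consecutive
# 5-minute periods triggers the scaling policy.
cloudwatch.put_metric_alarm(
    AlarmName="wordpress-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "wordpress-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=40.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```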
Throughout this process, an abstraction of communication occurs. Customers are provided with a single entry point to the website: the DNS name resolving to the ELB nodes. Simultaneously, the ELB works seamlessly with the ASG so that, in the event of an instance termination, the customer is not affected; they are simply redirected to another instance where session data is already available thanks to our EFS. And voilà, we have a multi-tier web application!
Understanding the architectural framework of this multi-tier web application was incredibly difficult at first. There were many things to consider: ELBs, their nodes, and the way in which those nodes are used to distribute traffic across my instances. Abstraction of communication is another key concept (and one of confusion!): the idea that ELBs allow each tier to scale (and work) independently, and that (through cross-zone load balancing) even distribution of traffic is made possible. As a result, I know the difference between ALBs and NLBs and their use cases, how to utilise LTs, and how dynamic scaling policies work.
This entire project has allowed me to gain a deeper understanding of scalable architecture and the key concepts surrounding a multi-tier application. For those hoping to delve deeper into ASGs, ELBs, EFS and RDS, I would highly recommend this project!
If you want to try out this mini project yourself, then you can find the link to it here.
(Image: AWS 3-Tier Architecture. Source: AWS)