The digital world is constantly pushing the boundaries of how rich content is accessed, with near-zero latencies increasingly demanded. Delivering on this demand to maintain or improve customer satisfaction amplifies the expectations placed on infrastructure and application architectures.

Diving into how latency impacts real-world customer experiences, let’s briefly explore some research aggregated by HubSpot. In short, the research highlights that optimising page-load times improves user experience (UX), conversion rates, and ultimately sales revenue.

  • The first five seconds of page-load time have the highest impact on conversion rates.
  • Website conversion rates drop by an average of 4.42% with each additional second of load time (between seconds zero and five).
  • As page-load time goes from one second to ten seconds, the probability of a mobile site visitor bouncing increases by 123%.

Australian consumers and businesses have always faced this user-experience-versus-latency dilemma when pushing the boundaries of low-latency Internet services. With Australia fitting snugly over the USA or a big portion of Europe, the vast distances involved are great for week-long road trips, however not optimal for delivering a fantastic user experience.

User Experience AWS.png

The launch and explosion of public cloud providers exacerbated this challenge, as locally placed datacentres became less favourable in modern architectures. This somewhat interesting challenge became a reality for businesses that wanted to adopt public cloud services whilst operating primarily, or having a large customer base, across states, territories, or neighbouring countries not situated in close proximity to a public cloud provider’s datacentres.

Another interesting challenge for medium and larger Australian organisations, especially those offering Software-as-a-Service products, is that a large portion of the user base may be situated in New Zealand.

table 1.png

Content Delivery Networks (CDNs), or Amazon CloudFront specifically, probably crossed your mind whilst reading this post. For the uninitiated in the inner workings of a CDN, a CDN is briefly defined as follows:

A CDN refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content. A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, JavaScript files, stylesheets, images, and videos.

In summary, a CDN offers an approach of caching static data and assets closer to the user’s location, lowering the delivery latency of rich content, which commonly comprises “bigger” files and JavaScript assets. A delay here more often than not results in a sub-optimal user experience. Various web application architectures have been positioned over the last couple of years, from Single Page Applications (SPAs) to Progressive Web Apps (PWAs) – all with a rich, low-latency user experience in mind.

A drawback of relying solely on a CDN like Amazon CloudFront to optimise the user experience lies in the word “static”. When a rich user experience relies on personalisation at the individual user level, the API traffic and database transactions that generate the personalised experience remain a key facet of the user-experience-to-latency ratio. This is especially true for Software-as-a-Service products, where basically every click of the mouse is a latency-sensitive request-and-response round trip to the origin server. Caching at this level of the domain is not easily achieved within the context of a CDN – strategies and case-by-case exceptions exist, but they are not common practice. WebSocket-based APIs are a topic for another day, but also a consideration, albeit not a very common one.
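To make the static-versus-personalised split concrete, below is a minimal origin sketch – a hypothetical Flask app with hypothetical routes – showing how response headers tell a shared cache such as CloudFront what it may and may not cache:

```python
# Hypothetical origin app: static assets are marked cacheable at the edge,
# personalised API responses are marked uncacheable for shared caches.
from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route("/assets/app.js")
def static_asset():
    # Identical for every user: safe for a CDN to cache for a day
    resp = make_response("console.log('hello');")
    resp.headers["Content-Type"] = "application/javascript"
    resp.headers["Cache-Control"] = "public, max-age=86400, immutable"
    return resp

@app.route("/api/dashboard")
def personalised_api():
    # Per-user payload: must travel to the origin on every request
    resp = make_response(jsonify({"user": "example", "items": []}))
    resp.headers["Cache-Control"] = "private, no-store"
    return resp
```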

In 2017 and then in 2021, AWS introduced Lambda@Edge and CloudFront Functions respectively – a great solution to dynamic requirements in a static cache service such as our trusty CDN. Several other use cases also came to light following the introduction of this at-the-edge processing capability; however, truly dynamic content with limited regional presence remained somewhat challenging in the quest for zero-latency user experiences.
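For a flavour of this at-the-edge capability, here is a minimal Lambda@Edge viewer-request handler sketched in Python (CloudFront Functions themselves support only JavaScript). The NZ-steering logic and reliance on the forwarded CloudFront-Viewer-Country header are illustrative assumptions:

```python
# Minimal Lambda@Edge viewer-request sketch: per-request logic executed at a
# CloudFront edge location before the cache is consulted.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # CloudFront populates this header only when the distribution's policy
    # forwards CloudFront-Viewer-Country to the function.
    country = (
        request["headers"]
        .get("cloudfront-viewer-country", [{}])[0]
        .get("value", "")
    )

    if country == "NZ":
        # Illustrative: steer New Zealand viewers to a dedicated origin path
        request["uri"] = "/nz" + request["uri"]

    return request
```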

Amazon CloudFront offers a good spread of edge locations across Australia and New Zealand. The overlay map below highlights locations within Sydney, Melbourne, Perth, and Auckland. To a great degree, these edge locations resolve a good portion of the latency challenges for static content, improving the user experience should users be based in and around Melbourne, Perth, or Auckland. More information on the edge location types can be found here.

User Experience AWS (1).png

The challenge with truly dynamic, data-driven interfaces – those that drive personalised results and business services (Software-as-a-Service / SaaS solutions) – is that CDNs are very ineffective at caching granular (per-user, per-session) results, especially where similar requests are very infrequent. You can still cache such responses, but the cache would rarely serve the purpose it is intended for and may raise your CDN cost.

So where does this leave the end-user experience in the quest for zero latency in a modern public cloud landscape?

Welcome to AWS Local Zones – in December 2019, the first AWS Local Zone was introduced in Los Angeles. A Local Zone extends an existing AWS region closer to end users, providing single-digit-millisecond latency to a handful of popular AWS services in the zone. Local Zones are attached to a parent region, access to services and resources is performed through the parent region’s endpoints, and setup is as simple as defining a subnet in the Local Zone and incorporating that subnet into the Virtual Private Cloud (VPC) routing configuration. A major benefit of this VPC extension into a Local Zone is that all services (applications) deployed in the Local Zone can seamlessly and transparently access any AWS service in the parent region via Amazon’s redundant, high-bandwidth private network backbone.

Local Zone Fact Sheet

Local Zones offer a huge advantage in bringing true compute closer to the end user, be it a customer or business user; however, there are some facts and limitations you need to consider when opting for a Local Zone.

  • Features – Not all Local Zones are created equal: some offer bare-bones AWS services, while others offer a well-established mini compute region.
  • Purchasing model – All compute is On-Demand or Spot; however, to deliver on the Cost Optimisation pillar, Savings Plans can be purchased. Reserved Instances are not available in Local Zones.
  • Pricing differences – EC2 instances and other services carry a higher price than in the parent region. The premium is not excessive, and the business benefits should easily justify it. A simple on-demand comparison between parent region and Local Zone for an EC2 m5.xlarge: $0.192 vs. $0.230 per hour, a $0.038/hour difference, equating to a US$27.36 premium over a full 720-hour month.
  • Monitoring and Automation – AWS CloudFormation, Amazon CloudWatch, AWS CloudTrail, and others work seamlessly with Local Zones.

Local Zone Features

The good, bad, and ugly of the AWS Local Zone announcements for Asia Pacific are very briefly described as follows:

  • The good news is that Local Zones offer a good spread of very common and popular AWS compute services to deploy close to the end-user, with many locations to choose from.
  • The bad news is that, to date, the Australian and New Zealand Local Zone locations have only been announced, so no chequered flag just yet. Exact dates for each coming online are not yet known; however, I would speculate that 2023–2024 is a good bet.
  • The ugly is that of all the currently launched single-digit-latency zones, only one Local Zone (Los Angeles) offers Amazon ElastiCache and Amazon RDS (and a goody-bag full of other services).

Amazon Aurora does not support deployment in Local Zones, and based on the Aurora architecture, this becoming a future option is unlikely. Further, it seems likely that offering services like Amazon ElastiCache and Amazon RDS in a Local Zone depends on at least two “zone segments” making up the Local Zone footprint – as with the Los Angeles Local Zone architecture, which offers us-west-2-lax-1a and us-west-2-lax-1b. The table below highlights Local Zone features and services as at 04.2022, with a well-educated guess being that the Asia Pacific Local Zones will not launch with a full array of services.

table 2.png

Use cases

Several use cases have been positioned for Local Zones since their launch – to a degree, most are industry specific and advanced compared to most organisations’ operational and business process requirements. The most commonly highlighted use cases are EC2-hosted artist workstations for graphics-intensive workloads, online gaming, financial transaction processing closer to stock exchanges, and machine learning inference for real-time decision support. Applications of this nature benefit from the extremely low latency made possible by the geographic proximity Local Zones offer.

Within Asia Pacific, the geographic positioning of Local Zones across Australia and New Zealand opens up a new use case: delivering services to a new geography (New Zealand) from Australia, without the need to set up a full ecosystem of resources, networking, and security inside your AWS account in a brand-new region once the Auckland region comes online. This use case will drastically lower the complexity for some small-to-medium enterprises wishing to offer their customers an improved user experience through near-zero latencies.

Regulatory compliance has also been thrown into the Local Zones use case basket, and it may be plausible; however, the specific data or workload compliance requirements would need careful consideration and support from the security or compliance team, as the network and primary AWS components still operate under the parent region.

Once the Australian and New Zealand Local Zones come online, we are looking at three zones, pinned across two parent regions. The map overlay below offers a glimpse of each Local Zone and the parent region it is pinned to:

table 3.png

User Experience AWS (2).png
AP-SOUTHEAST-2 (Sydney) and AP-SOUTHEAST-4 (Melbourne) Local Zones

The Australian Local Zone offering does leave a few interesting questions, or stones unturned. A welcome network decision from AWS is that both the Brisbane and Auckland zones are homed to the Sydney region, with Perth to be homed to the future Melbourne region. That configuration, however, makes a simple Perth presence (via a Local Zone) from the predominantly used Sydney region a slightly more complicated networking exercise than most small-to-medium businesses truly need. Perhaps AWS should consider a side-by-side Local Zone in Perth with Sydney as the parent region (ap-southeast-2-per-1a) #aws-product-feature-request.

To a great degree, introducing a well-defined Local Zone architecture for Australian and New Zealand business operations may resolve a good portion of the latency challenges and deliver a generic user experience uplift where users are based in Brisbane, Auckland (and Perth), whilst operating from the AWS Sydney region.

Local Zones in action

Setting up a Local Zone is accomplished by following a few easy steps; a scripted sketch follows the list below. Here’s what you need to do:

  • Enable (opt in to) the Local Zone (via the VPC console)
  • Create a new VPC subnet dedicated to the Local Zone (and routing as needed)
  • Launch EC2 instances and deploy your application
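The same three steps, sketched with boto3 against the Los Angeles Local Zone; the zone names are real, but the VPC ID, CIDR, AMI, and instance type are illustrative assumptions:

```python
# Scripted version of the three setup steps (illustrative identifiers).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# 1. Opt in to the Local Zone group (a one-off action per account)
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1", OptInStatus="opted-in"
)

# 2. Create a subnet pinned to the Local Zone rather than a regional AZ
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # existing VPC in the parent region
    CidrBlock="10.100.12.0/24",
    AvailabilityZone="us-west-2-lax-1a",  # the Local Zone name
)

# 3. Launch an instance into the Local Zone subnet
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="t3.medium",             # verify the zone offers this type
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```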

User Experience AWS (3).png
Enabling Los Angeles Local Zone

However, is it as simple as “deploy your application in the Local Zone”, especially where you have a geographically separated user base and a single workload/application? Serving users where they are is ultimately the intended purpose. Let’s explore the deeper facets that need to be considered for a common HTTP/web-facing application serving your customer base today:

  • Application Architecture
    • Will each Local Zone operate an isolated workload/application serving that specific geographic user base (separate application instance, separate database)?
    • Will the workload/application run in the zone and connect back to the parent region for database queries?
    • Could the application cache some or all data locally in the zone without the cache becoming stale? Can a write-through cache be implemented? (A sketch follows this list.)
    • To deliver a true low-latency consumer experience, caching strategies will be a critical architectural principle to weave into the mix.
  • Domain names and DNS
    • How will users in each geographic region be routed to the appropriate Local Zone or the parent region?
    • Will each region/zone operate on a separate domain, for example a web workload running in Sydney on myapp.com.au and the Auckland Local Zone on myapp.co.nz?
    • Where would this leave a Brisbane-deployed workload/application?
  • Database
    • Aligned with the application architecture detailed above, the location of the database will make or break our quest for zero latency, especially if the application requires frequent data reads.
    • A read-replica database within the Local Zone would position us much closer to that goal; however, can this be done with the application and technology architecture we are working with or planning for?
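On the write-through cache question raised above, here is a minimal sketch assuming a Redis-compatible cache (for example, ElastiCache or self-managed Redis on EC2) inside the Local Zone; the db object and its methods are hypothetical placeholders for a data-access layer:

```python
# Write-through cache sketch: writes hit the parent-region database AND the
# zone-local cache, so subsequent zone-local reads stay fresh.
import json
import redis

# Redis-compatible cache running inside the Local Zone (placeholder host)
cache = redis.Redis(host="cache.local-zone.internal", port=6379)

def read_profile(user_id, db):
    cached = cache.get(f"profile:{user_id}")
    if cached is not None:
        return json.loads(cached)         # zone-local read, single-digit ms
    profile = db.fetch_profile(user_id)   # fall back to the parent region
    cache.set(f"profile:{user_id}", json.dumps(profile), ex=300)
    return profile

def write_profile(user_id, profile, db):
    db.save_profile(user_id, profile)     # authoritative write, parent region
    # Write-through: refresh the zone-local cache in the same operation
    cache.set(f"profile:{user_id}", json.dumps(profile), ex=300)
```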

DNS Options Perspective

Route 53 offers geoproximity-based routing policies that aid in routing requests to the most appropriate server, AWS Region, or AWS Local Zone based on the user’s location. Your mileage may vary, and some further tweaking would be needed for specific use cases; however, it offers a powerful mechanism for operating the application under one domain name.

User Experience AWS (4).png
R53 – Geoproximity Policies (note that not all Regions and Local Zones depicted are available as at 04.2022)
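As at the time of writing, geoproximity rules are authored through Route 53 Traffic Flow. Below is a hedged boto3 sketch; the policy document is abbreviated and illustrative (endpoint IPs, names, and coordinates are assumptions), so verify it against the traffic policy document format before use:

```python
# Hedged sketch: a geoproximity traffic policy routing users to the closer
# of two endpoints (Sydney region vs. a future Auckland Local Zone front end).
import json
import boto3

r53 = boto3.client("route53")

policy_document = {
    "AWSPolicyFormatVersion": "2015-10-01",
    "RecordType": "A",
    "StartRule": "by_location",
    "Endpoints": {
        "sydney": {"Type": "value", "Value": "198.51.100.10"},
        "auckland-lz": {"Type": "value", "Value": "198.51.100.20"},
    },
    "Rules": {
        "by_location": {
            "RuleType": "geoproximity",
            "GeoproximityLocations": [
                # Coordinates locate each endpoint; Bias can shift the boundary
                {"Latitude": "-33.87", "Longitude": "151.21",
                 "Bias": "0", "EndpointReference": "sydney"},
                {"Latitude": "-36.85", "Longitude": "174.76",
                 "Bias": "0", "EndpointReference": "auckland-lz"},
            ],
        }
    },
}

r53.create_traffic_policy(
    Name="myapp-geoproximity",
    Document=json.dumps(policy_document),
)
```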

Database Read Replica Perspective

A common solution in the AWS playbook may be Amazon RDS read replicas; however, this is currently only a viable option for the Los Angeles Local Zone. Below is a simplistic view of an Amazon RDS deployment, configured with a primary (reader/writer) in the parent US West (Oregon) region and a read replica in the Los Angeles Local Zone. This configuration offers a perfect, fully managed solution to bring read queries closer to the user.

User Experience AWS (5).png
Amazon RDS Read Replica within Local Zone (LAX)
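For reference, creating the LAX read replica can be sketched with boto3 as follows, assuming the DB subnet group already includes a subnet in us-west-2-lax-1a (identifiers are placeholders):

```python
# Sketch: place an RDS read replica in the Los Angeles Local Zone.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-replica-lax",
    SourceDBInstanceIdentifier="myapp-primary",  # primary in US West (Oregon)
    DBInstanceClass="db.t3.medium",              # verify classes offered in the zone
    AvailabilityZone="us-west-2-lax-1a",         # pin the replica to the Local Zone
)
```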

There is a chance that RDS may not be a common Local Zone service for many months or years to come – so where does this leave the architecture supporting our quest? There is a solution; not out of the box, but a solution nonetheless.

With a little bit of effort, you can deploy your own EC2-based MySQL or PostgreSQL servers in the Local Zone and synchronise them from your Amazon RDS instances deployed in the parent region – this effectively provides a read replica in one or more Local Zones. These EC2-based read replicas are not fully managed and do not offer out-of-the-box high availability or automatic DNS failover, but with some smart application logic, failing read traffic over to the parent region’s Amazon RDS servers may tick most non-functional requirement boxes.
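A hedged sketch of the wiring for MySQL follows, using standard MySQL replication plus the RDS-provided stored procedure for binlog retention. Hosts, credentials, and binlog coordinates are placeholders, and the initial dump/restore onto the EC2 replica is omitted:

```python
# Sketch: make an EC2-hosted MySQL in a Local Zone replicate from an RDS
# MySQL source in the parent region (placeholder hosts/credentials).
import pymysql

# On the RDS source: retain binlogs long enough for the replica to catch up,
# and create a replication user.
src = pymysql.connect(host="myapp-primary.xxxx.us-west-2.rds.amazonaws.com",
                      user="admin", password="***")
with src.cursor() as cur:
    cur.execute("CALL mysql.rds_set_configuration('binlog retention hours', 24)")
    cur.execute("CREATE USER 'repl'@'%' IDENTIFIED BY '***'")
    cur.execute("GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'")

# On the EC2 replica in the Local Zone: point it at the RDS endpoint using
# the binlog coordinates captured when the initial dump was taken.
replica = pymysql.connect(host="10.100.12.25", user="root", password="***")
with replica.cursor() as cur:
    cur.execute("""
        CHANGE MASTER TO
          MASTER_HOST='myapp-primary.xxxx.us-west-2.rds.amazonaws.com',
          MASTER_USER='repl', MASTER_PASSWORD='***',
          MASTER_LOG_FILE='mysql-bin-changelog.000042',
          MASTER_LOG_POS=154
    """)
    cur.execute("START SLAVE")
```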

User Experience AWS (6).png
Amazon Aurora is unlikely to be supported in Local Zones (author assumption); however, the same replication technology/process used to replicate Amazon RDS MySQL to EC2 MySQL can also be utilised to replicate Amazon Aurora MySQL to EC2 MySQL.

Network Perspective

If starting out today with a new AWS presence, think about a possible future VPC layout that keeps the VPC construct simple while reserving a range of IPs to extend into one or more Local Zones. This keeps workload, intra-workload, and supporting-component communication simple to define, implement, and maintain once Local Zones become an Australian reality. Below is an example of a flexible, easy-to-configure-and-adapt VPC layout. The layout offers a somewhat reserved sizing (/20) with three-Availability-Zone subnetting (CIDR – 10.100.0.0/20).

table 4.png
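To sanity-check a layout like this, Python’s standard ipaddress module can carve the /20 into /24 blocks, with spare blocks reserved for future Local Zone extension (the allocation below is illustrative rather than a copy of the table):

```python
# Carve 10.100.0.0/20 into sixteen /24 subnets and earmark spares for
# future Local Zone subnets.
import ipaddress

vpc = ipaddress.ip_network("10.100.0.0/20")
blocks = list(vpc.subnets(new_prefix=24))   # sixteen /24 blocks

azs = ["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"]
for az, block in zip(azs, blocks):
    print(f"{az}: {block}")

# The remaining /24s stay unallocated, ready to extend into Local Zones later
print("reserved for Local Zones:", [str(b) for b in blocks[3:]])
```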

Another option, not explored in this article, is a dedicated Local Zone-specific VPC. Peer this dedicated VPC with your primary workload VPC, adjust some routing, and you are off to the races – this may be a pattern for organisations that have an existing VPC and need to explore alternatives.

When operating an AWS Transit Gateway and planning for a Local Zone expansion, the deployment does require special attention. You can’t create a transit gateway attachment for a subnet in a Local Zone; however, configuring the attachment within a parent region subnet and configuring the necessary routing will offer the desired network results. Please see the documentation for details.
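A minimal boto3 sketch of that pattern, with placeholder identifiers: the attachment references a parent region subnet, while the Local Zone subnet’s route table still points at the transit gateway:

```python
# Sketch: Transit Gateway attachment in the parent region, with the Local
# Zone subnet routed through it (all identifiers are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# The attachment must use a subnet in a parent region Availability Zone
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa"],   # parent region subnet
)

# The Local Zone subnet's route table can still send traffic via the TGW
ec2.create_route(
    RouteTableId="rtb-0bbbbbbbbbbbbbbbb",     # Local Zone subnet route table
    DestinationCidrBlock="10.200.0.0/16",     # e.g. a remote/on-premises range
    TransitGatewayId="tgw-0123456789abcdef0",
)
```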

Conclusion

Local Zones will offer organisations and product teams a leap forward in the quest to deliver near-zero-latency services. This is especially true when considering the low barrier to entry for Local Zone-powered applications/workloads. A key factor that may place them out of reach for some is Amazon RDS support; however, with a bit of effort, database read replicas are not impossible to provision and operate.

A great take-away from this non-scientific research is that enterprises situated in New Zealand get a much better latency experience from the AWS Sydney region than their Perth-based counterparts. These are a few pointers that can be used to substantiate a business case to move to the AWS Sydney region sooner rather than later, rather than deferring the move until the New Zealand region is due to open in 2024. Auckland’s better latency to Sydney compared to Perth’s is probably not the best rationale for a datacentre migration business case, but it is worth a good laugh in the boardroom at least.