By [email protected]_84. Posted on February 17, 2022.

Key Takeaways

- Moving to the edge will become inevitable for enterprises across industries that need to address data localization, privacy laws, and performance optimization.
- The data patterns on edge described here walk through how the edge can integrate and coexist with your existing ecosystem.
- Industry-tested, scalable data patterns allow a variety of complex use cases to be onboarded to the edge.
- Open-source technologies promise a cost-efficient alternative to cloud models that grow costly over time.
- Expanding your data horizon to the edge enables performant user experiences within a lower latency budget.

With growing competition to bring the data that powers end-user experiences closer and closer to the user, and with the advent of local data-privacy laws, let's look at three enterprise data patterns: "synchronous data retrieval," "subsequent data retrieval," and "prefetch data retrieval." We will examine how each works in the data center, and how its data can be ported to the edge without cloning the entire data-center architecture as is, while keeping the control plane in your hands and avoiding blind spots about what lives on the edge.

Enterprise User Experience Data

The figure above gives a quick glimpse of existing data at the enterprise level: Service A is an abstract representation of multiple services, Service B represents the service tier accessing data local to the data center, and Service C abstracts all services accessing data from external third-party providers. Data retrieval across these services can be classified as follows.

Synchronous Data Retrieval

Here all the data for the user is retrieved in a single parent request; this could be the initial HTML payload call or service calls from a native mobile application. Examples include transactional or personalized user experiences; an enhanced version chunks the data progressively or paginates it to the end user.

Subsequent Data Retrieval
In this case, the initial critical data is retrieved first, and the remaining data is retrieved over asynchronous calls. Examples include below-the-fold recommendations (content below the initial screen resolution, made available to the end user on scroll), advertisements, and game tiles.

Prefetch Data Retrieval

In the third scenario, based on predictive statistics, ranking, or workflow, the user's engagement time is used to prefetch or load data in advance; this could be media resources, templates, or personalized data.

For anyone unfamiliar with edge computing: anything outside your data center can be considered the edge of the network. If the software-defined network is closer to the end user, you or your organization qualify for edge computing, i.e., the compute is an interaction between the data center and edge components, unlike local computing, where essentially all of the computing happens at the edge nodes of the cluster. Before we dive into data patterns at the edge, note that the industry norm has been to push static data sets to user devices or the browser, which consumes the user's resources and bandwidth while surrendering any control over data that was made available at some point in time. What we try to understand here is how the edge can keep the traditional control knobs and semantics with the engineers at the data center and at the edge, without making the user pay the penalty for your optimization. In short, we are targeting to expand the data horizon from D space to E space, as shown in Figure 1. This brings in impediments associated with:

- Limited infra capex
- Control mechanisms
- Failover
- Observability
- Data purging
- Experimentation
- Traffic routing and ramp-up
- Legacy support

This article walks through the different data patterns, addresses the associated problem sets, and gives some insight into which technologies can be leveraged to scale for enterprise data sets.
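To make the first two retrieval styles concrete, here is a minimal sketch of a request handler that blocks on the critical (synchronous) payload while resolving below-the-fold content over an asynchronous call. The fetcher names and payloads are illustrative assumptions, not the article's actual services.

```python
import asyncio

# Hypothetical fetchers standing in for the service calls described above.
async def fetch_critical(page_id: str) -> dict:
    # Synchronous retrieval: everything the initial render needs.
    return {"page": page_id, "html": "<main>...</main>"}

async def fetch_below_fold(page_id: str) -> dict:
    # Subsequent retrieval: below-the-fold recommendations, ads, game tiles.
    return {"recommendations": ["item-1", "item-2"]}

async def render(page_id: str) -> dict:
    # Kick off the subsequent call early so it overlaps with the critical work.
    deferred = asyncio.create_task(fetch_below_fold(page_id))
    response = await fetch_critical(page_id)   # critical data blocks the response
    response["below_fold"] = await deferred    # stitched in once available
    return response

page = asyncio.run(render("home"))
```

In a real system the deferred call would be triggered from the browser after onload; the overlap shown here is the same latency win, just applied server-side.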
Edge Data Patterns

Let's discuss the three major data patterns for edge computing and user optimization.

Synchronous Data Retrieval

Most of the data on the internet falls into this bucket: the enterprise data set comprises many interdependent services working hierarchically to extract the required data sets, which can be personalized or generic in format. The feasibility of moving this data to the edge was traditionally limited to serving static resources, header data sets, or media files from the edge or the CDN; the base data set was still retrieved from the source data center or the cloud provider. For user experiences, optimization centers on the critical rendering path and the associated improvements in navigation timelines for web-based experiences, and on how much of the view model is offloaded to the app binary in device experiences. In hybrid experiences, the state model is updated periodically via server push or poll. The use case in discussion is how we can enable edge retrieval for data sets that are personalized.

Pattern

The user-experience data is segmented into non-user context (NC) and user context (UC) information. Non-user-context data holds information that is generic across users or experiences; user context is specific to the device or user in question. The non-user-context data is stored, invalidated, and purged on the fly, and the associated user-context information is stitched in, or tailored, during the parent request lifecycle. The design principle is that introducing the edge should not cripple the network or prevent the domain teams from implementing their respective control and optimization knobs.

Components

Let's discuss the data center components first, as many teams will already have data center services in place, and they are a good vantage point to start from.
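The NC/UC segmentation above can be sketched as a pair of helpers: one that splits a view model into an edge-cacheable part and a per-user part, and one that stitches them back together per request. The module names and groupings (CACHEABLE, RESTRICTED) are assumptions mirroring the pattern, not the article's actual identifiers.

```python
# Illustrative module groupings; real systems would drive these from config.
CACHEABLE = {"header", "footer", "catalog"}   # non-user context (NC) modules
RESTRICTED = {"payment_profile"}              # must never reach the edge cache

def segment(view_model: dict) -> tuple[dict, dict]:
    """Split a response into an edge-cacheable NC part and a per-user UC part."""
    nc = {k: v for k, v in view_model.items()
          if k in CACHEABLE and k not in RESTRICTED}
    uc = {k: v for k, v in view_model.items() if k not in nc}
    return nc, uc

def stitch(nc: dict, uc: dict) -> dict:
    """Recombine cached NC data with freshly retrieved UC data per request."""
    return {**nc, **uc}

view_model = {"header": "h1", "catalog": ["a", "b"],
              "greeting": "Hi Alice", "payment_profile": {"id": 7}}
nc, uc = segment(view_model)
```

Because stitching happens inside the parent request lifecycle, the cached NC half can be shared across users while the UC half never leaves the per-request path.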
Data Center Components

Service A

Typically this is the front-end tier within the enterprise, and it needs a few changes to support the pattern above. First, the tier needs to propagate the edge-side flow-identifier header to the underlying services, handle cookie management, and determine and set the necessary cache-control values: whether to cache at all and, if caching, for how long, generally using the standard Cache-Control directives. It is also accountable for removing any global headers or universal modules that are dynamic, and reinserting them as part of the user-context response payload. The front-end tier is expected to address four primary flows as far as the edge ecosystem is concerned:

- The regular (non-POP) traffic flow
- The regular experimentation-controls traffic
- The cache create/write calls, which in this case are the non-user-context requests
- The user-context requests

Service B

This tier segments the data into user and non-user context, exposed through a module provider group such as CACHEABLE so the business team can control which segments of data should be cached and which should not, preceded by a ValidationViewModel that imposes rules and restrictions so that restricted data sets are never cached. Experimentation and tracking are addressed at this tier for business-impact analytics. Based on the edge flow-identifier headers, this tier decides on the desired data responses and the associated tracking for analytics, and colors the data response with the desired cache TTL values and a revision ID for purge and pull, thereby leaving the business team in control of the data even after moving to the edge or POP ecosystem.

Edge Components

As observed above, the entry point for the data center behaves differently based on the headers propagated from the edge, and takes different actions based on the route in question.
At the same time, the edge data store must be compatible with the semantics of browser cache directives. To accommodate this on the edge or POP, we need a scalable software load balancer such as Envoy, which offers powerful discovery services (primarily clusters, routes, listeners, and secrets) and the ability to add custom filters. As for the edge data store cluster, given the requirement that the data store understand browser cache directives, Apache Traffic Server (ATS) was a favorable and scalable option for us.

SLB

As shown in the diagram below, the software load balancer accounts for the following steps of operation:

- Handling incoming user requests
- Asking the business-specific Service E for the appropriate cache key
- Returning the non-user-context data to the end user on a cache hit
- Writing data to the edge data store via consistent hashing on a cache miss, for future invocations
- Stitching in the user context within the parent request lifecycle
- Cache invalidation on updates or revision-ID changes

Service E

This service is essential on the edge: it allows business or domain teams to feed in the necessary knowledge engineering associated with the data set in question. If your data or experiences are international in nature, Service E handles the different locales accordingly, identifies whether the user is part of an experimentation group, applies custom logic based on query parameters, performs device detection to allow or disallow caching patterns, and, primarily, generates the cache key for the user request.

Edge Data Store

The edge data store cluster primarily needs to handle cache purging based on the TTL values set, invoke the data center as necessary to retrieve new data sets, and refuse to cache a data set, even if an attempt is made, when the cache-control values say so.
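The SLB's cache-key generation and consistent-hashing write path above can be sketched as follows. The ring implementation, node names, and key format are illustrative assumptions; a production SLB such as Envoy provides its own hashing policies.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring standing in for the SLB's node selection."""
    def __init__(self, nodes, vnodes=64):
        # Virtual nodes smooth out the key distribution across the stores.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, cache_key: str) -> str:
        # First ring point clockwise from the key's hash owns the key.
        idx = bisect_right(self._points, self._hash(cache_key)) % len(self._ring)
        return self._ring[idx][1]

def build_cache_key(path: str, locale: str, experiment: str) -> str:
    # Stand-in for Service E: locale and experiment bucket color the key.
    return f"{path}|{locale}|{experiment}"

ring = HashRing(["edge-store-1", "edge-store-2", "edge-store-3"])
key = build_cache_key("/home", "en-US", "control")
node = ring.node_for(key)
```

The important property is determinism: every read or write for the same key lands on the same edge store node, which is what makes the single-point-of-observability claim workable.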
In our use case, since ATS invokes the origin for every individual request, the ATS data store was onboarded with a custom plugin so that caching happens only when the request carries the cache-key header value, thereby letting the SLB control what to cache and when.

Conclusion

The approach above caches the non-personalized (non-user-context) data on the edge and retrieves the user context within the request lifecycle, so the end user sees the same experience as on the non-POP path. Early delivery of the non-user context lets the browser start constructing the DOM, improving critical-rendering-path performance, while the user data seeps into the browser rendering tier later for better perceived performance. On our high-volume traffic pages, we were able to bring latency down from over 1,500 ms to under 700 ms, with the added advantages of handling bot traffic, centralized observability of global traffic, and efficient correlation of traffic to business.

Subsequent Data Retrieval

Unlike synchronous data retrieval, where the power of caching centers on repeated calls to the same data content for a higher cache-hit ratio, that approach may not be suitable if you have a long-tail access pattern and the data retrieved is unique in nature. Most such scenarios can be optimized by sending the critical content early and then retrieving the subsequent data set from the edge as required. This pattern has the added advantage of enabling edge data retrieval even when the data is unique and accessed only once, for example advertisements. To see where the latency-budget improvement comes from in this pattern, consider the generic timelines for content retrieval in user experiences. In the diagram below, when a request is made from the browser (1-2), the initial page content is retrieved along with the identifiers for the data-content calls to be made in the future (5), primarily triggered after the page onload events.
The time the browser spends on DOM construction, plus the network latency, is used to parallelize the calls that retrieve data sets from the Service B tier. As depicted in the diagram above, the opportunity in moving to the edge is to make the content for the different identifiers available on the edge with a low TTL (short-lived transactional data sets) to power the end-user experiences and, where feasible, to move Service B closer to the third-party provider's location for quicker data availability at the Service B tier.

Pattern

In short, the data available at the data center from Service B is pushed to the respective edge cluster as and when it becomes available, for quicker retrieval; on any miss, the flow falls back to the traditional data center pattern. All reads and writes are diverted through the same software load balancer for consistent hashing and a single point of observability.

Components

Let's look at the data center components and the edge components needed to implement this pattern.

Data Center Components

Service A

This service embeds into the existing architecture to facilitate dynamic identifier creation for placements or page modules in the final response to the end user. In a few use cases this tier can incorporate user information to customize the page-module responses relevant to the end user. It also mandates propagation of the edge flow-identifier headers downstream.

Service B

The Service B abstraction accommodates data retrieval from third-party systems or bidding engines. The mandate here is parallelized data retrieval with FIFO support.

Service C

Service C accommodates the data content written to and read from the data center.

Service E

Service E surfaces edge insights to the business teams. To make data available local to the end user in question, this tier accommodates replication and data availability based on the original parent request.
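The push-then-fallback read path above can be sketched in a few lines. The store, identifiers, and payloads are illustrative assumptions; the point is that a miss degrades gracefully to the traditional data center path while both outcomes are counted at the same observability point.

```python
# edge_store stands in for the POP data store; content lands here only when
# the data center pushes it. fetch_from_dc is the traditional fallback path.
edge_store: dict[str, str] = {"placement-42": "pushed-ad-content"}
metrics: dict[str, int] = {"cache_hit": 0, "cache_miss": 0}

def fetch_from_dc(identifier: str) -> str:
    return f"origin-content-for-{identifier}"

def read(identifier: str) -> str:
    """Serve from the edge when the content was pushed; fall back on a miss."""
    if identifier in edge_store:
        metrics["cache_hit"] += 1          # single point of observability
        return edge_store[identifier]
    metrics["cache_miss"] += 1             # anomaly detection watches this rate
    return fetch_from_dc(identifier)       # traditional data center pattern

hit = read("placement-42")
miss = read("placement-99")
```

Note that a miss does not write back to the edge store: in this pattern writes arrive only via pushes from the data center, so a drop in pushes shows up directly as a rising miss rate.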
Edge Components

SLB

In this pattern, the software load balancer accounts for:

- Handling no-content scenarios with an HTTP 204 response
- Addressing preflight request and response headers for early determination of allowed domains
- Propagating POP cluster info so upstream services can determine session affinity for future calls
- POST-to-PUSH translation for the edge data store (ATS)
- Observability for cache-hit, cache-miss, write, and read paths, and data size

For example, in the scenario below, where each piece of content is unique, the hit ratio on the edge was above 82%, with writes routed to the parent request's POP and not replicated to other POPs in the region. When the business team reduced the push of data toward the edge, the change was evident in the edge-side response codes for anomaly-detection systems to pick up.

Edge Data Store

In this scenario, we leverage ATS in PUSH mode to store the data content of interest with the associated TTL values. The key point is that the write is made through POST or PUT from the data center, but it is translated to the ATS-supported PUSH method, returning HTTP 201 on insertion, HTTP 200 on duplication, and HTTP 204 in content-miss scenarios.

Conclusion

The above approach opens up opportunities for moving data to the edge even when the data set is accessed or used only once (short-lived transactional records), as well as in scenarios where user determination is not feasible, such as guest or new users to the system.

Prefetch Data Retrieval

In the prefetch scenario, the focus is on making the next deterministic data set available in advance. Consider Service Z in the diagram below, the precursor to the page request in action powered by Service A, B, or C. During the funnel or workflow, based on what comes next, the associated (predicted or ranked) data set is prefetched and made available on the edge.
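The prefetch flow just described can be sketched as follows. The workflow map standing in for Service Z's predictions, and the fetch payloads, are assumptions for illustration only.

```python
# predict_next stands in for Service Z's predictive or ranked output.
def predict_next(current_step: str) -> list[str]:
    workflow = {"cart": ["checkout", "payment"], "search": ["item-page"]}
    return workflow.get(current_step, [])

def fetch_from_dc(dataset: str) -> str:
    return f"payload-for-{dataset}"

def prefetch(current_step: str, edge_store: dict) -> None:
    """Warm the edge store with predicted next data sets during user dwell time."""
    for dataset in predict_next(current_step):
        # setdefault avoids re-fetching data already pushed to this POP.
        edge_store.setdefault(dataset, fetch_from_dc(dataset))

edge_store: dict[str, str] = {}
prefetch("cart", edge_store)
```

The actual retrieval of each prefetched data set can then follow either the synchronous or the subsequent pattern; prefetching only decides *when* the data is moved, not *how* it is served.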
In general this is applicable to scenarios such as loading game tiles, recommendations, top search results, and media files. The effectiveness of this pattern depends on how well the data set to be cached or stored on the edge is chosen; the data itself can be made available through synchronous data retrieval, subsequent data retrieval, or offline data.

Conclusion

We are moving into an era of data explosion in the coming years, attributable in large part to edge platforms. Only the organizations on the edge will be able to move into the next league of innovation around edge intelligence. As the edge data patterns above show, it is now feasible for any organization to transform an existing legacy system to take advantage of edge computing. Since we are handling the underlying data, this approach is resilient to a changing technology stack; in addition, these data patterns promise localized data handling that respects privacy laws.

Tags: Data, Edge, Patterns