What did I do?
1. High-Volume Search and Real-Time Aggregation
Business Problem
Ascenda often needs to handle large numbers of requests for searching and retrieving information from various third-party data suppliers. The challenge lies in efficiently querying multiple external sources, combining results, and ensuring users receive up-to-date information in real time. Since customers expect rapid responses, any delay in aggregating supplier data can degrade user experience and reduce overall satisfaction.
Overview of the Solution
To address this high-volume, real-time requirement, I designed a system capable of managing concurrent requests while aggregating data from multiple suppliers. Leveraging concurrency and the actor model, each incoming request is routed to a lightweight process that communicates with a corresponding external data supplier. These processes run in parallel, significantly reducing latency for end users.
<incoming-diagram>
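The fan-out described above can be sketched with Python's asyncio as a stand-in for the actor model; the supplier names, payload fields, and latency stub below are illustrative assumptions, not the production implementation:

```python
import asyncio

# Hypothetical supplier stub; in the real system each task would call an
# external supplier API. Payload shape here is invented for illustration.
async def query_supplier(name: str, query: str) -> list[dict]:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return [{"supplier": name, "query": query, "rate": 100.0}]

async def fan_out(query: str, suppliers: list[str]) -> list[dict]:
    # One lightweight task per supplier, awaited concurrently, so total
    # latency tracks the slowest supplier rather than the sum of all calls.
    tasks = [query_supplier(s, query) for s in suppliers]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    merged: list[dict] = []
    for r in results:
        if isinstance(r, Exception):
            continue  # one failed supplier should not fail the whole search
        merged.extend(r)
    return merged
```

Because `gather` runs the tasks concurrently, a slow or failing supplier degrades only its own slice of the results.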
Once the data is retrieved, I use data transformation techniques to merge and normalize the disparate supplier responses into a single coherent result set. This step involves filtering duplicates, selecting optimal rates, and broadcasting updates using a publish-subscribe mechanism whenever new or updated information becomes available. By focusing on scalable concurrency patterns and clear data transformation logic, the system can handle tens of thousands of queries daily with consistent performance.
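A minimal sketch of the merge step, assuming each supplier keys hotels differently and duplicates are resolved by keeping the cheapest rate (field names `id`, `hotel_id`, and `rate` are invented for illustration):

```python
def normalize(raw: dict, supplier: str) -> dict:
    # Map each supplier's field names onto one shared schema.
    return {
        "hotel_id": str(raw.get("id") or raw.get("hotel_id")),
        "rate": float(raw["rate"]),
        "supplier": supplier,
    }

def merge_best_rates(responses: dict[str, list[dict]]) -> list[dict]:
    # Group duplicate hotels across suppliers and keep the optimal rate.
    best: dict[str, dict] = {}
    for supplier, rows in responses.items():
        for raw in rows:
            row = normalize(raw, supplier)
            current = best.get(row["hotel_id"])
            if current is None or row["rate"] < current["rate"]:
                best[row["hotel_id"]] = row
    return sorted(best.values(), key=lambda r: r["rate"])
```

The pub-sub broadcast mentioned above would then publish each updated `best` entry to subscribed clients.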
2. Caching and Data Retrieval
Business Problem
Organizations handling large data sets from external sources often face performance bottlenecks due to repeated queries to the same information. In scenarios where data (such as location or reference data) remains valid for a certain period, continuously requesting the same content leads to high latency and resource usage. The business challenge is to serve these frequently accessed data sets more efficiently, reducing the response time for end users and alleviating the load on back-end systems.
Overview of the Solution
To tackle this challenge, I introduced a caching layer with a time-based eviction mechanism (time-to-live, or TTL) to temporarily store frequently accessed information. Whenever a request for data arrives, the system first checks the cache; if the data is found and is still valid, it is returned immediately without querying the external source. This cache-aside approach significantly cuts down on redundant requests to the back end. By maintaining a TTL, the system ensures data is refreshed regularly, minimizing the risk of serving stale information.
<incoming-diagram>
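The TTL lookup above can be sketched in a few lines; this in-process dictionary is only a stand-in for whatever shared store (e.g. Redis) the production system would use, and the TTL value is illustrative:

```python
import time

class TTLCache:
    """Minimal cache with per-entry time-to-live and lazy eviction on read."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and treat as a miss
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

def fetch_with_cache(cache: TTLCache, key: str, loader):
    # Cache-aside: serve from cache when fresh, otherwise hit the source
    # once and store the result for subsequent requests.
    value = cache.get(key)
    if value is None:
        value = loader(key)
        cache.set(key, value)
    return value
```

Repeated requests within the TTL window never touch the external source, which is where the latency savings come from.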
This solution effectively handled thousands of daily requests while reducing search latency from over four seconds to about two seconds for high-volume data sets. The result is a smoother user experience, lower infrastructure costs, and improved scalability.
3. User Session Management and Rate Limiting
Business Problem
In a user-facing service where individuals manage time-sensitive tasks—such as booking or cancelling reservations—secure and reliable user session handling is crucial. The system must ensure that only valid sessions can make changes to bookings, while also guarding against malicious actors who might attempt to brute force authentication endpoints. Failing to manage sessions securely can lead to unauthorized access, while a lack of rate limiting can invite abuses that degrade the overall service.
Overview of the Solution
To address these security and usability requirements, I implemented RESTful API endpoints that authenticate users by validating session identifiers stored in secure cookies. Each authenticated session expires after a set time window to prevent unauthorized reuse. This approach is grounded in the principles of session management, ensuring that the session data is validated against a trusted database. Users can thus seamlessly log in, view, and cancel bookings within the valid session duration.
<incoming-diagram>
Furthermore, I introduced rate limiting to cap the number of consecutive failed logins. This was done by monitoring failed attempts in real time, storing counters in a shared data store, and returning a standardized response code (HTTP 429 Too Many Requests) when the threshold was exceeded. By leveraging a token bucket or a leaky bucket algorithm (common approaches to rate limiting), the system deters brute-force attacks while remaining stable for legitimate users. These steps help maintain both security and privacy, ensuring users can manage their bookings without risking unauthorized access.
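Of the two algorithms mentioned, a token bucket is the simpler to sketch; the capacity and refill rate below are illustrative, and in a multi-node deployment this state would live in the shared data store rather than in process memory:

```python
import time

class TokenBucket:
    """Token-bucket limiter: each attempt spends one token; tokens refill
    continuously up to a fixed capacity, allowing short bursts but capping
    the sustained rate."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill = refill_per_second
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller would respond with HTTP 429
```

Keyed per user or per IP, one bucket per login endpoint caps consecutive failed attempts without penalizing normal traffic.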
4. Batch Processing and Data Indexing
Business Problem
In large-scale systems handling hundreds of thousands of data records—such as travel destinations or geographic data—sequential processing quickly becomes a bottleneck. Businesses need to efficiently integrate updates, correct inaccurate information, and index records in external search engines without incurring prohibitive delays. Achieving both speed and accuracy is paramount to ensure a seamless user experience when searching or browsing through these extensive data sets.
Overview of the Solution
To address these challenges, I implemented a batch processing framework that breaks massive data sets into smaller, more manageable chunks. This approach leverages concurrent data updates, wherein multiple workers process chunks of data simultaneously, each responsible for tasks like performing external API calls for data corrections or indexing records in a search service. By distributing the workload, the system achieves scalability, handling thousands to tens of thousands of data operations per minute.
<incoming-diagram>
Throughout the process, I utilized data structures optimized for parallel operations and fine-grained concurrency controls. This ensures data integrity—preventing collisions or partial writes—while maximizing throughput. The result is a robust workflow capable of indexing vast amounts of information quickly and accurately, as well as automatically correcting errors (e.g., boundary coordinates) with minimal manual intervention.
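The chunk-and-fan-out pattern above can be sketched with a worker pool; `handle_chunk` stands in for the external API call or search-index write, and the chunk size and worker count are illustrative defaults:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def chunked(records, size):
    # Yield fixed-size chunks from any iterable of records.
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

def process_batch(records, handle_chunk, chunk_size=500, workers=8):
    """Split the data set into chunks and process them concurrently.
    handle_chunk receives one chunk and performs the per-chunk work
    (API correction call, index write, etc.)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so results line up with chunks.
        return list(pool.map(handle_chunk, chunked(records, chunk_size)))
```

Because each worker owns an entire chunk, chunks never share records, which gives the collision-free partial-write behavior described above without per-record locking.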
5. Improved Page Rendering and Response Optimization
Business Problem
In dynamic web applications, delivering content quickly and efficiently is a major concern. Users often abandon pages if loading times are slow, particularly when dealing with large data sets like hotel inventories or booking details. The business challenge, therefore, is to reduce the time-to-first-byte (TTFB) and speed up initial page load for both signed-in users (where data can be personalized) and anonymous visitors (where read-only or frequently accessed pages can be precomputed).
Overview of the Solution
To tackle these performance requirements, I implemented efficient rendering techniques that involve both Server-Side Rendering (SSR) and Static Site Generation (SSG)—two approaches that drastically lower page load times. For dynamic or personalized pages (e.g., booking details), SSR was employed to render the content on the server before sending it to the client. This resulted in a reduction of TTFB from over 600 milliseconds to under 300 milliseconds, thereby enhancing the perceived performance for millions of users.
<incoming-diagram>
Simultaneously, for frequently accessed but less dynamic content—like hotel search result pages—SSG was used to pre-generate HTML at build time, along with caching any large data sets. This methodology cut first-load times from around 1.2 seconds to under 400 milliseconds for anonymous users. Finally, to maintain a smooth user experience (UX), a reusable progress component was introduced, providing real-time feedback during data fetch operations. This progress indicator reduces user uncertainty and underscores the responsiveness of the application.
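The SSG side reduces to rendering pages once at build time and serving the resulting files; this Python sketch uses a trivial string template in place of the real framework, and the page data is invented:

```python
from pathlib import Path
from string import Template

# Illustrative page template; a real SSG build would use the app's actual
# component templates and data sources.
PAGE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")

def build_pages(pages: dict, out_dir: Path) -> list:
    """Pre-render one HTML file per page slug at build time, so anonymous
    visitors are served static files instead of a live render."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for slug, data in pages.items():
        path = out_dir / f"{slug}.html"
        path.write_text(PAGE.substitute(title=data["title"], body=data["body"]))
        written.append(path)
    return written
```

Serving these files from a CDN or static host is what pushes first load well below a server-rendered round trip.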
6. Booking Processes and Cancellations
Business Problem
Travel-related services often require users to manage their bookings in real time—be it modifying existing details or cancelling them entirely. The key challenge is to provide a straightforward, secure, and efficient workflow for cancellation. High user volumes can lead to delays, especially if each request is processed synchronously, potentially degrading the user experience.
Overview of the Solution
To meet these needs, I developed an end-to-end booking cancellation flow where users provide a unique booking reference and a personal identifier (e.g., last name) to securely retrieve their existing reservations. Leveraging asynchronous communication, the system handles multiple cancellation or modification requests in parallel, keeping the application responsive under heavy user traffic.
<incoming-diagram>
This setup also employs robust validation, verifying that a user’s submitted booking reference and credentials match existing records. Immediate feedback is given on whether a booking is successfully cancelled or if there is an error—enhancing transparency and user confidence. The result is a seamless cancellation process that can effectively scale to accommodate hundreds of thousands of potential users.
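The validation step can be sketched as follows; the in-memory store, field names, and error messages are illustrative stand-ins for the real database and API responses:

```python
# Stand-in for the booking database; keys are booking references.
BOOKINGS = {
    "ABC123": {"last_name": "tan", "status": "confirmed"},
}

def cancel_booking(reference: str, last_name: str) -> dict:
    """Validate the reference + personal identifier pair, then cancel.
    Returns a result dict so callers can give the user immediate feedback."""
    booking = BOOKINGS.get(reference.strip().upper())
    if booking is None or booking["last_name"] != last_name.strip().lower():
        # Same message for both failure modes, so the response does not
        # reveal whether the reference or the name was wrong.
        return {"ok": False, "error": "booking not found"}
    if booking["status"] == "cancelled":
        return {"ok": False, "error": "already cancelled"}
    booking["status"] = "cancelled"
    return {"ok": True, "status": "cancelled"}
```

Normalizing both inputs before comparison keeps the lookup forgiving of casing and stray whitespace while still requiring both credentials to match.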