Using Collections for Handling Web-Hooks with Apps Script
3 min read · Jul 16, 2025
OK, we already know that the best way of cooking webhooks is to use a library and a web app as separate files:
Web-Hooks
- Web-Hook (deploy as WebApp) https://github.com/Max-Makhrov/GoogleSheets/blob/master/utils/webhook.gs
- Library (deploy as Library and use in WebApp↑) https://github.com/Max-Makhrov/GoogleSheets/blob/master/utils/webhookLib.gs
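The idea is that the web app stays a thin wrapper and all the real logic lives in the library. As a rough sketch of the wrapper (the library identifier `WebhookLib` and its `handleWebhook` function are placeholders here, not necessarily the names used in the linked files):

```javascript
// webhook.gs — deployed as a Web App; all real work is delegated to the library.
// "WebhookLib" is whatever identifier you give the library when adding it,
// and handleWebhook() is a hypothetical entry point — check the linked files for the real names.
function doPost(e) {
  return WebhookLib.handleWebhook(e);
}
```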
Next, we face a real-world scenario: webhooks should work quickly, and we know that Apps Script is 🐌. And limited.
Idea (not new): Cache Data, Then Process with a Triggered Function
1. Webhook (doPost → in our case, “Library”) Action:
- Receives the incoming data from the service (e.g., Monday.com).
- Immediately writes the raw payload to CacheService.
- Instantly returns a 200 OK success response.
2. Time-Driven Trigger (e.g., every 1 minute):
- A separate function runs automatically on a schedule.
- It uses LockService to prevent multiple simultaneous executions (e.g., if the trigger fires again before the last run is finished).
- It retrieves all the cached data.
- It processes the data (e.g., writes to a Google Sheet, calls another API).
- It clears the processed data from the cache.
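Here is a minimal sketch of both parts, kept in one script project for simplicity (with the library setup above, doPost would simply delegate to a library function doing the same thing). QUEUE_KEY, the response text, and processItem_() are illustrative names, not taken from the linked repo; everything else is the standard CacheService, LockService, and ContentService API.

```javascript
var QUEUE_KEY = 'webhook_queue'; // one cache entry holding a JSON array of pending tasks

// Part 1: the webhook. Store the raw payload and answer immediately.
function doPost(e) {
  var lock = LockService.getScriptLock();
  lock.waitLock(5000); // held only for the milliseconds needed to update the queue
  try {
    var cache = CacheService.getScriptCache();
    var queue = JSON.parse(cache.get(QUEUE_KEY) || '[]');
    queue.push({ receivedAt: Date.now(), body: e.postData.contents });
    cache.put(QUEUE_KEY, JSON.stringify(queue), 21600); // 6 hours — the CacheService maximum
  } finally {
    lock.releaseLock();
  }
  return ContentService.createTextOutput('OK'); // instant 200 OK back to the sender
}

// Part 2: attach this to a time-driven trigger (e.g., every minute).
function processQueuedTasks() {
  var lock = LockService.getScriptLock();
  var cache = CacheService.getScriptCache();

  // Take a snapshot of the queue and clear it while holding the lock briefly,
  // so a slow processing run never blocks incoming webhooks.
  lock.waitLock(5000);
  var batch;
  try {
    batch = JSON.parse(cache.get(QUEUE_KEY) || '[]');
    cache.put(QUEUE_KEY, '[]', 21600);
  } finally {
    lock.releaseLock();
  }

  var failed = [];
  batch.forEach(function (task) {
    try {
      processItem_(task); // your real logic: write to a Sheet, call another API, ...
    } catch (err) {
      failed.push(task); // keep the task only if processing did not succeed
    }
  });

  // Put failed tasks back so the next run retries them: a task only disappears on success.
  if (failed.length) {
    lock.waitLock(5000);
    try {
      var queue = JSON.parse(cache.get(QUEUE_KEY) || '[]');
      cache.put(QUEUE_KEY, JSON.stringify(failed.concat(queue)), 21600);
    } finally {
      lock.releaseLock();
    }
  }
}

// Placeholder for the actual processing step.
function processItem_(task) {
  // e.g., append task.body to a Google Sheet or forward it to another API
}
```

Two deliberate choices in this sketch: the lock is held only around queue reads and writes, so incoming webhooks are never blocked by a long processing run, and failed tasks are re-queued instead of dropped, which mirrors the “delete only on success” rule below. Keep in mind that a single CacheService value is capped at roughly 100 KB, one of the reasons a dedicated store (shown at the end of the post) is worth having.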
Advantages of This Approach:
- Fast Webhook Response: This is the biggest win. Your doPost function becomes incredibly fast because all it does is a quick write to the cache. This prevents timeouts from the sending service, which often expects a response within a few seconds.
- Handles Long-Running Processes: If processing the data takes longer than the sender is willing to wait (or the 30-second limit for simple triggers), this pattern is essential. The time-driven trigger gets its own 6-minute execution window (or we know how to bypass this limit as well :).
- Manages API Rate Limits: By processing data in batches, you can better control the rate at which you call other APIs (like writing to a Google Sheet or calling another service), helping you stay within their rate limits. A batch-write sketch follows this list.
- Prevents Concurrency Issues: Using LockService is crucial. It ensures that if your processing takes longer than your trigger interval (e.g., processing takes 90 seconds, but the trigger runs every minute), you don't have two instances running at the same time trying to process the same data.
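To make the rate-limit point above concrete, here is a hedged sketch of batch-friendly processing. The spreadsheet ID, sheet name, endpoint URL, and the processBatch_() name are all assumptions for illustration.

```javascript
// Sketch: process a whole batch of tasks with as few expensive calls as possible.
function processBatch_(tasks) {
  if (!tasks.length) return;

  // One setValues() call instead of one appendRow() per task keeps Sheets usage low.
  var rows = tasks.map(function (t) { return [new Date(t.receivedAt), t.body]; });
  var sheet = SpreadsheetApp.openById('YOUR_SPREADSHEET_ID').getSheetByName('Log'); // assumed IDs
  sheet.getRange(sheet.getLastRow() + 1, 1, rows.length, rows[0].length).setValues(rows);

  // If each task also triggers an external API call, throttle the loop explicitly.
  tasks.forEach(function (t) {
    UrlFetchApp.fetch('https://example.com/api', { method: 'post', payload: t.body }); // assumed endpoint
    Utilities.sleep(200); // roughly 5 requests per second — tune to the target API's limits
  });
}
```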
Potential Considerations and Downsides:
- CacheService is Not Permanent: We don’t care, since we run the trigger often.
- Complexity: This architecture is more complex to set up and debug than a simple, direct processing model. Especially caching. We’ll show an example later.
- Data Ordering: If the order in which webhooks are processed is critical, you need to add a timestamp to your cached data and sort it before processing. Not in my case, so I ignore this.
- Error Handling: What happens if processing a batch of data fails? Easy: we delete the task from storage only on success.
Storing Tasks
Now the fun part. Here we had to store tasks as a collection, delete tasks, and read tasks, all with the not-so-perfect CacheService. Now you don’t need to worry: use this store solution:
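As an illustration of what such a store can look like, here is a minimal sketch (not the author's actual solution): each task lives under its own cache key, and an index key remembers which tasks exist, since CacheService cannot list its keys. TaskStore and all of its member names are made up for this example, and calls to add()/remove() should be wrapped in LockService (as in the trigger pattern above) when several executions may touch the store at once.

```javascript
var TaskStore = {
  _cache: CacheService.getScriptCache(),
  _INDEX: 'tasks_index',
  _TTL: 21600, // 6 hours — the CacheService maximum

  // Store one task and remember its key in the index.
  add: function (payload) {
    var id = 'task_' + Date.now() + '_' + Math.floor(Math.random() * 1e6);
    this._cache.put(id, JSON.stringify(payload), this._TTL);
    var ids = JSON.parse(this._cache.get(this._INDEX) || '[]');
    ids.push(id);
    this._cache.put(this._INDEX, JSON.stringify(ids), this._TTL);
    return id;
  },

  // Read every pending task in one getAll() call, skipping expired entries.
  readAll: function () {
    var ids = JSON.parse(this._cache.get(this._INDEX) || '[]');
    var values = this._cache.getAll(ids);
    return ids
      .filter(function (id) { return values[id]; })
      .map(function (id) { return { id: id, payload: JSON.parse(values[id]) }; });
  },

  // Delete one task — call this only after it was processed successfully.
  remove: function (id) {
    this._cache.remove(id);
    var ids = JSON.parse(this._cache.get(this._INDEX) || '[]');
    ids = ids.filter(function (x) { return x !== id; });
    this._cache.put(this._INDEX, JSON.stringify(ids), this._TTL);
  }
};
```

With something like this in place, the webhook boils down to TaskStore.add(JSON.parse(e.postData.contents)), and the triggered function calls TaskStore.readAll(), processes each item, and TaskStore.remove(item.id) only on success.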
Boom!
Happy coding!
