A case of a terabyte-scale backup and recovery solution

You are a tech lead who's been tasked with backing up 1.5 TB (yes, not a typo, terabytes) of data DAILY to S3.

You don't have much background on this requirement, so you start asking questions:

  1. Where is the data produced, and where is it stored currently?

  2. What kind of data (format) is it, really? What's the access pattern for this data?

  3. What data retention period do we need?

  4. Why do we need a backup in the first place? Why S3? Is that already decided?

  5. Are we already on AWS?

  6. Can someone share some business context with me?

  7. What are the data restoration parameters - e.g., time to restore, where to store the restored data?

  8. How often does the backup need to run? And how often would we need to run a restore?

You have many other questions, but you choose to start with these...

You gather answers to these and more:

Context

The company is in the cybersecurity domain and provides its clients a "Managed Security Services" offering. The data is generated by its multi-tenant SIEM platform deployed in private data centers. The SIEM handles hundreds of tenants and produces 1.5 TB of log data daily (tar.gz format). The infra consists of 24 beefy machines in data centers (DC). There's a private link between the DC and AWS. Every night, a cron job creates tar.gz files on these 24 machines.

You find out some more details.

The average tar.gz file size per machine is 65 GB (24 machines × ~65 GB avg tar.gz file ≈ 1.5 TB of data daily). Each machine handles a few tenants, and its tar.gz file contains data for those tenants. Currently, the tar.gz files are moved to a 200 TB archival storage. The current archival process needs fixing. The data needs to be kept for 1 year for compliance purposes (total data per year could be 700 TB, given the growth of the company). There's also a need to re-arrange the data into a specific dir structure before gzipping.
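A quick sanity check on that yearly number (my arithmetic, not stated in the original requirements): 1.5 TB/day × 365 days ≈ 550 TB, so budgeting for 700 TB leaves roughly 25-30% headroom for growth in daily volume over the year.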

With this, you start designing a high-level solution that will be:

  • automated

  • secure

  • performant

  • operationally simple to run and

  • budget-friendly

Exploring the Solution

You also dig into the SIEM to understand how it stores this huge volume of data.

You perform a POC to find out whether you can hook into the SIEM to create the required dir structure upfront. This would save a lot of disk IO otherwise spent reading the data, transforming it, and writing it back in the required layout.

With some trial and error, the POC is successful. ✅

Since the data is seldom accessed and only kept for compliance purposes, you choose the S3 Glacier Deep Archive storage class. It's the cheapest S3 storage class, just as durable as the others, and designed exactly for long-lived, rarely accessed data at this scale.
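For illustration, landing objects directly in that storage class is a single flag on the usual CLI upload (the bucket and key names below are made up):

```
# Upload a nightly archive straight into Glacier Deep Archive.
# Bucket and key are hypothetical; --storage-class is the standard AWS CLI flag.
aws s3 cp /backup/staging/2024-01-15/host-01.tar.gz \
  s3://example-siem-backups/2024/01/15/host-01.tar.gz \
  --storage-class DEEP_ARCHIVE
```

Uploading to S3 Standard and transitioning via a lifecycle rule would also work, but uploading directly into DEEP_ARCHIVE avoids paying for the intermediate storage class.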

The challenge is uploading this huge data from the DC to S3 daily.

You test the available bandwidth by uploading large files to S3. This impacts the production network. Luckily, a backup network pipe can be used without affecting the production traffic. 😅
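A crude way to measure the effective throughput of that pipe (a sketch; the 10 GiB test file and bucket name are illustrative):

```
# Time a multi-GB upload over the backup link to estimate throughput.
dd if=/dev/urandom of=/tmp/bw-test.bin bs=1M count=10240
time aws s3 cp /tmp/bw-test.bin s3://example-siem-backups/bw-test.bin
# MB/s ≈ 10240 / elapsed seconds; extrapolate to the ~1.5 TB nightly volume.
```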

You find out that the 24 VMs are beefy, but they are already serving prod traffic. Running the backup workload on them causes high CPU utilization and affects prod traffic. So you decide to run the tar.gz operation on a separate central server (in the non-prod network).

With these and a few more small POCs in place, you propose this flow for the whole solution.

Solution

  1. create a backup dir structure on each of the 24 VMs via cron job

  2. scp these files to NFS

  3. create tar.gz files on a central server

  4. upload tar.gz to S3

Tech-wise, you choose something simple and boring - bash scripts and cron jobs.
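As a rough illustration of what "bash scripts and cron jobs" can look like here (script names, paths, and timings are hypothetical, not the actual setup), the central server's crontab could chain the stages like this:

```
# Hypothetical crontab on the central (non-prod) server.
# Each script exits non-zero on failure and sends a success/failure notification.

# 01:00 - pull last night's backup dirs from the 24 VMs onto NFS
0 1 * * *  /opt/backup/scp_pull.sh  >> /var/log/backup/scp_pull.log  2>&1

# 03:30 - tar.gz the re-arranged dir structure, one archive per VM
30 3 * * * /opt/backup/make_tars.sh >> /var/log/backup/make_tars.log 2>&1

# 06:00 - upload the archives to S3 (Glacier Deep Archive)
0 6 * * *  /opt/backup/s3_upload.sh >> /var/log/backup/s3_upload.log 2>&1
```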

There's some pushback about using bash to do all this work, but you're sure it can work well (and be maintainable).

This is your overall solution:

[Diagram: overall flow - 24 DC VMs → NFS → central server (tar.gz) → S3 Glacier Deep Archive]

It's just three bash scripts overall, with some error handling and success/failure notifications. You orchestrate the execution of these scripts such that they work like a data pipeline:

  • scp

  • tar and

  • s3 upload

All these operations run in parallel as much as possible. You encounter some edge cases during beta testing, but nothing you can't handle, and the pipeline then works beautifully in production for months without much oversight!
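For a flavour of that parallelism (a sketch; the host list, concurrency level, and paths are made up), the scp stage could fan out across the 24 VMs like this:

```
#!/usr/bin/env bash
# Sketch of the scp stage: pull backup dirs from all 24 VMs concurrently.
# hosts.txt holds one VM hostname per line; all names and paths are illustrative.
set -euo pipefail

DATE=$(date -d 'yesterday' +%F)
DEST=/mnt/nfs/backups/$DATE
mkdir -p "$DEST"

# Up to 8 transfers at a time, so neither the NFS mount nor the backup
# network link gets saturated.
xargs -P 8 -I{} -a hosts.txt \
  scp -r "backup@{}:/data/backup/$DATE" "$DEST/{}"
```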

The entire solution costs less than a few hundred dollars per month to run.

You cross-check your solution goals:

  • automated ✅ (bash and cron ftw!)

  • secure ✅ (runs in separate VPC)

  • performant ✅ (yesterday's data is uploaded within 10 hours, during non-peak time)

  • operationally simple to run ✅

  • budget-friendly ✅ (less than a few hundred $ monthly)

Lessons

  1. Simple, boring tech works well on prod

  2. When faced with unknowns, form and validate your hypothesis by doing POCs

  3. Understand existing tech and context as much as you can

  4. Build things iteratively instead of in a big-bang way

  5. Be solution-focused, not tech-focused

I write such stories on software engineering. There's no specific frequency, as I don't make these up. If you liked this one, you might love 🏒 Not all debugging stories have a happy ending.

Follow me on LinkedIn and Twitter for more such stuff.

Subscribe for more such content

Stay updated with the latest insights and best practices in software engineering and site reliability engineering by subscribing to our content.
