Arjun Kshirsagar
Backend engineer focused on systems > syntax.
I build APIs, data pipelines, and infra-heavy backend services.
Currently obsessed with databases, distributed systems, and real-world scale.
Current focus: High-performance data ingestion & distributed observability.
Now
- 📍 Based in Bengaluru, India
- 🎯 Goal: Building high-performance data infrastructure
- 🧠 Learning: ClickHouse internals & storage engines
- 🏋️ Training: Improving consistency in the gym
Experience
SDE 1 — Wealthy.in
Feb 2025 — Aug 2025
Scaled production reporting systems and backend analytics infra.
Backend Intern — Wealthy.in
2024 — 2025
Worked on production reporting systems, analytics pipelines, and backend services used at scale.
Projects
Anno-tex (2026)
End-to-end CI/CD pipeline for backend annotation services.
Why: Manual deployments were slow and error-prone. A zero-touch Kubernetes deployment system was needed.
- Tech: AWS EKS, Docker, GitHub Actions
- Trade-off: Chose EKS over ECS for finer control over networking and future service-mesh adoption.
- Learned: Managing secrets across environments is the hardest part of CI/CD at scale.
Reporting Microservice @ Wealthy (2024–2025)
Modular generation of complex investment reports.
Why: Monolithic reporting workflows were hitting memory limits and timing out under load.
- Tech: Python, ETL pipelines, Microservices
- Impact: Streamlined data flow across reporting and analytics features, while delivering 2× more reports than before.
- Constraint: Maintained 100% data accuracy under increased throughput.
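The core idea behind the modular rewrite can be sketched as streaming reports in fixed-size batches, so no single report holds the full dataset in memory. This is a minimal illustration, not the production code; the row shape and `amount` field are assumptions.

```python
from typing import Iterable, Iterator, List


def chunked(rows: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Yield fixed-size batches so the pipeline never materialises the full dataset."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # trailing partial batch
        yield batch


def generate_report(rows: Iterable[dict], size: int = 500) -> Iterator[str]:
    """Emit report lines batch by batch instead of building one giant document.

    Hypothetical example: real reports aggregate far more than a sum.
    """
    for batch in chunked(rows, size):
        total = sum(r["amount"] for r in batch)
        yield f"{len(batch)} transactions, total {total}"
```

Because each batch is bounded, peak memory stays flat regardless of report size, which is what removes the memory-limit and timeout failure mode described above.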
CRM Analytics Engine (2024–2025)
Real-time analytics across investment transactions.
Why: A traditional RDBMS couldn’t handle aggregation queries over large volumes of transactional data in real time.
- Tech: ClickHouse, FastAPI, Celery
- Result: Reduced server load by 40% using exponential backoff and Redis caching.
- What broke: Early ingestion patterns caused “too many parts” errors in ClickHouse; fixed via batching and ingestion tuning.
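The batching fix can be sketched as a buffer that accumulates rows and inserts them in bulk: ClickHouse creates a new data part per INSERT, so many tiny inserts trigger "too many parts". A minimal sketch, assuming a caller-supplied `flush` callable (the real insert statement and thresholds are project-specific).

```python
import time
from typing import Callable, List, Optional, Tuple


class BatchBuffer:
    """Accumulate rows and flush them as one large insert.

    Flushes when either max_rows is reached or the oldest buffered row
    is older than max_age_s, bounding both part count and latency.
    """

    def __init__(self, flush: Callable[[List[Tuple]], None],
                 max_rows: int = 10_000, max_age_s: float = 2.0):
        self.flush_fn = flush
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self.rows: List[Tuple] = []
        self.first_at: Optional[float] = None

    def add(self, row: Tuple) -> None:
        if not self.rows:
            self.first_at = time.monotonic()
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or time.monotonic() - self.first_at >= self.max_age_s):
            self.flush()

    def flush(self) -> None:
        if self.rows:
            # e.g. one INSERT ... VALUES carrying every buffered row
            self.flush_fn(self.rows)
            self.rows = []
```

Trading a couple of seconds of latency for batched inserts keeps the background merge workload manageable, which is the usual remedy for part-count pressure.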
Gradonix (2025)
Backend platform for academic workflows and data management.
Why: Existing systems were fragmented and hard to scale across institutions.
- Tech: Next.js, MongoDB, Redis
- Focus: Clean domain modeling, role-based access, and extensible data workflows.
- Learned: Getting schemas and permissions right early saves massive refactors later.
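The role-based access idea can be sketched as a single role-to-permission table consulted by one check function, so rules live in one place rather than scattered across handlers. The role and permission names here are illustrative assumptions, not the platform's actual model.

```python
from typing import Dict, Set

# Hypothetical role -> permission mapping; in practice this would be
# loaded from the database rather than hard-coded.
ROLE_PERMS: Dict[str, Set[str]] = {
    "admin":   {"read", "write", "manage_users"},
    "faculty": {"read", "write"},
    "student": {"read"},
}


def can(role: str, permission: str) -> bool:
    """Central permission check; unknown roles get no permissions."""
    return permission in ROLE_PERMS.get(role, set())
```

Centralising the check is what makes later schema and permission changes a one-file edit instead of a sweep across every endpoint.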
Things I believe
- I prefer boring tech at scale.
- I don’t trust systems I can’t explain from first principles.
- Side projects are for learning, not hype.
- LLMs are tools, not substitutes for fundamentals.
Roadmap (2026)
- Master database internals (LSM Trees, B-Trees)
- Ship 2 infra-heavy side projects
- Write 5 deep-dive technical blogs
- Contribute to an open-source data engine
Contact
If you like serious systems or boring tech done right — reach out.