

Logs > 1TB per month?

Episilia is built for you!


"Episilia has indeed been helping us in a big way to
manage 2TB per day of logs in production."

- VP, DevOps @ 


Sunil Guttula

Hi, I’m Sunil Guttula, architect of Episilia and CEO of 91social. I’ve been a hands-on developer for the last 25 years, building and supporting systems in production. We built Episilia so you can log as much as you want without worrying about latency and cost.

Here you’ll find a simple, detailed technical overview of Episilia and all that it can do for your business, so you can make an informed choice quickly.

  1. Why did we build Episilia?

  2. How is Episilia fast and cheap for high volume logs?

  3. What does it cost for X TBs of daily logs?

  4. What features does Episilia support and what does it not?

  5. How can you try out Episilia?

I do hope you find Episilia useful. Happy logging!

Sunil Guttula

The reason  |

Why build Episilia?

Why one more log management system?

Logs are quite distinct from business transaction data in that they are:

  1. High volume; 10x-100x more than transaction data

  2. Low budget; logs are expected to be managed with lower budgets

  3. Of varying value; critical during incidents and low value at other times 

Current log management systems share these common traits:

  1. They process logs using software built for business data (Lucene, Elasticsearch, ClickHouse)

  2. They store logs in the same way as business data

  3. They end up being slow and costly at high volumes

So we engineered Episilia from the ground up to handle high volume logs faster and cheaper.

  1. We designed new data structures optimized for log storage and retrieval 

  2. Grouped, indexed and compressed logs for low storage footprint

  3. Storage separated from compute; logs loaded on-demand for search

  4. Used the most compute-efficient language: C++ with SIMD

"Get the developers to log less, we got to bring down the cost"
CTO of a unicorn
Teams with high volume logs rely on Episilia to cut costs by 3x or more


"At we generate a huge amount of logs to discover and solve the bugs on the fast evolving tech stack. With high throughput ingestion and seamless integration with our Kubernetes cluster, Episilia is very efficient in resource utilization and performance. It helps us analyze our logs from the centralized Grafana dashboard. At the same time, centralized logging with Episilia helps our business technologies and security team for root cause analysis."


Abhinasha Karana

Technology Architect @ nurture farm

The design  | 

Fast and affordable

Episilia is cost-effective because it is fast

  • Episilia is fast because of its data structures, algorithms, high performance design and coding.

  • CPU costs about 4x as much as RAM, so we coded Episilia in C++ with SIMD to get the most out of every core.

  • New data structures exploit repeatability of values in logs and data affinity to apps.

  • Indexing uses highly tuned bloom filters with index size @ 1.5% of data size.

  • LZ4 used for fast (de)compression. 1 TB of logs with index compresses to 100GB storage at rest.

  • SIMD Indexer ingests logs @ 10MBps per core; 3TB of daily log ingestion needs a maximum of 4 cores.

  • Logs and all associated metadata are stored only in S3, no database needed. This keeps clusters stateless and scales well.

  • Indexing and search operate independently; search loads logs on-demand keeping compute requirements low.

  • A search query runs at 350 millicores and 300MB memory at most; 10 concurrent queries need 4 cores + 4GB RAM.

  • 98% of search queries return in 1-5 seconds, while reading logs from S3 block storage.

  • Log tail has a latency of 3-5 seconds from log capture to user console.

  • Alerts on logs and tail execute on the same consumer, keeping cost of real-time processing low.
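The sizing claims above hang together arithmetically. A quick back-of-envelope sketch, using only the figures quoted in the bullets:

```python
# Sanity-check of the sizing figures quoted above (illustrative arithmetic,
# not Episilia code).

def gb_per_day(mbps: float) -> float:
    """Sustained throughput in MB/s over 24h, returned in GB/day."""
    return mbps * 60 * 60 * 24 / 1024

# Indexing at 10 MBps per core => ~843 GB/day per core,
# so 3 TB (3072 GB) of daily logs needs 4 cores (ceiling division).
per_core_gb = int(gb_per_day(10))
cores_for_3tb = -(-3072 // per_core_gb)

# Index at ~1.5% of data size: 1 TB of logs => ~15 GB of index.
index_gb = 1024 * 0.015

# LZ4: 1 TB of logs plus its index stored in ~100 GB at rest,
# i.e. roughly a 10:1 reduction.
reduction = (1024 + index_gb) / 100
```

The 4-core figure for 3 TB/day and the ~10:1 storage reduction both fall straight out of the per-core rate and the 1.5% index overhead.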


"We did premature optimization every step of the way in building Episilia, and broke a few more rules along the way."
- Architect, Episilia

The cost  | 

$100 per TB per year

All inclusive - compute, block storage, 12 months retention and license

Estimated price for 1TB - 25TB logs per day

Price includes :


Features included :

Ingestion, Indexing, Search, Tail logs, Alerts on logs



Users :

Unlimited. Concurrent users are assumed to be 20 for the calculation.


Logs retention :

12 months logs are stored in S3


Deployment :

On-premise or client's cloud account


Cost includes :

Compute infra, block storage, License cost

Cost excludes :

Log collectors, log transfer to Episilia. 
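At $100 per TB per year, the annual figure for a given daily volume is simple arithmetic. One assumption in this sketch that the page does not spell out: the price is applied per TB of logs ingested over the year.

```python
# Rough annual-cost sketch at the quoted $100 per TB per year.
# Assumption (not stated on this page): the price applies per TB
# of logs ingested over the year.

PRICE_PER_TB_YEAR = 100  # USD, all-inclusive per the page

def annual_cost_usd(tb_per_day: float) -> float:
    return tb_per_day * 365 * PRICE_PER_TB_YEAR

cost_1tb = annual_cost_usd(1)    # 1 TB/day  -> $36,500/year
cost_25tb = annual_cost_usd(25)  # 25 TB/day -> $912,500/year

# Storage at rest for the 12-month retention, using the ~10:1
# compression figure from the design section: 1 TB/day of raw logs
# lands at roughly 0.1 TB/day compressed.
stored_tb_for_1tb_daily = 1 * 365 * 0.1  # ~36.5 TB in S3
```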

Cost of Episilia compared to its peers
Get the detailed price calculator


The features  | 

Log without limits

Debug issues, secure business, understand users

Features aligned to logs flow
Collecting logs

Sources: Kafka, HTTP, S3

  • Supports Open Telemetry and generic JSON message format for logs

  • Logs can be transferred via Kafka, HTTP,  S3 to Episilia

  • Timestamp derived from logs or stamped on arrival

  • Independent of log collectors - Fluent Bit, Vector, Filebeat
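A generic JSON log message is just a small record with a timestamp, an app identifier and a message body. A minimal sketch of such a record; the field names here are illustrative assumptions, not Episilia's documented schema:

```python
import json
import time

# Illustrative generic JSON log record; field names are assumptions,
# not Episilia's documented ingest schema.
record = {
    "timestamp": int(time.time() * 1000),  # ms epoch; or let the server stamp on arrival
    "app": "checkout-service",             # lets related logs be grouped by app
    "level": "ERROR",
    "message": "payment gateway timeout after 3 retries",
}

payload = json.dumps(record)
# This payload could then travel over any of the three supported
# transports: produced to a Kafka topic, POSTed over HTTP, or
# dropped into S3 for pickup.
```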


10-20 MBps per core

  • Indexing at 10-20 MBps per core; roughly 1TB per core per day

  • Index is 1.5% of data size, covers keyword + regex

  • Indexer instances can be started/stopped to scale with no data loss 

  • Group related logs together by virtual app IDs
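The 1.5% index-to-data ratio is plausible because bloom filters record membership in a fixed bit array rather than storing the keywords themselves. A toy bloom filter (not Episilia's implementation) shows the mechanism; sizes and hash counts here are arbitrary:

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: k salted SHA-1 hashes over an m-bit array.
    Illustrative only; Episilia's tuned filters are not shown here."""

    def __init__(self, m_bits: int = 8192, k: int = 4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        for salt in range(self.k):
            digest = hashlib.sha1(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # Bloom filters never report a false negative: anything
        # added always answers True here.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for word in ["timeout", "OOMKilled", "connection-reset"]:
    bf.add(word)
```

The filter above covers any number of keywords in a fixed 1 KB array; the trade-off is a small false-positive rate, which a search engine resolves by checking the actual data block.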

Tailing and Alerting

K8s to console ~ 3 seconds

  • Logs tailed to Episilia console in under 3 seconds from origin

  • Tail logs support keyword and regex filters 

  • Alerts on logs typically delivered within 5 seconds from origin

  • Alerts delivered to Slack, Pagerduty, email etc
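Filtering a tail by keyword or regex is the familiar pattern of applying a compiled expression to a live stream. A sketch over a plain iterator (Episilia's own tail pipeline is not shown on this page):

```python
import re
from typing import Iterable, Iterator

def tail_filter(lines: Iterable[str], pattern: str) -> Iterator[str]:
    """Yield only the log lines matching the given regex."""
    rx = re.compile(pattern)
    for line in lines:
        if rx.search(line):
            yield line

# Illustrative stream; in a real tail this would be a live feed.
stream = [
    "INFO  request served in 12ms",
    "ERROR payment gateway timeout",
    "WARN  retrying connection",
    "ERROR disk quota exceeded",
]
errors = list(tail_filter(stream, r"^ERROR"))
# errors -> the two ERROR lines only
```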


Searching logs

1-5 seconds, 1M results

  • All logs available to search any time; no need to pre-load

  • Search queries support keyword and PCRE-compliant regex matching

  • A search query needs a max of 350 millicores and 300MB RAM

  • 98% of search queries return in between 1-5 seconds

  • Large result sets (> 1M lines) are downloaded to a CSV file for analysis 

  • Logs fetched from S3 on-demand; no hard dependency on disk

  • Frequently read logs are cached to local disk and embedded RocksDB

  • Search instances can be started/stopped to scale

  • Grafana Loki browser supported besides Episilia native console

  • Save queries; share queries and search results with your team
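Because the Loki query format is supported, an existing Grafana setup can point at the cluster with no query rewrites. A sketch of building a LogQL query against Loki's standard `query_range` endpoint; the host name below is a placeholder, not a documented Episilia endpoint:

```python
from urllib.parse import urlencode

# LogQL: select the app's stream, then filter lines by regex.
logql = '{app="checkout-service"} |~ "timeout|connection reset"'

params = urlencode({
    "query": logql,
    "limit": 1000,
    "start": "2024-01-01T00:00:00Z",  # Loki accepts RFC3339 or ns epoch
    "end":   "2024-01-01T01:00:00Z",
})

# Placeholder host/port; substitute your own search endpoint.
url = f"http://episilia.example.internal:3100/loki/api/v1/query_range?{params}"
```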

Storage and archival

No DB, only S3

  • Data and index files are stored only in S3

  • All metadata stored in S3; no dependency on a live database

  • Supports any S3-compatible service - MinIO, Azure, GCP, DigitalOcean, Alibaba, OCP  

  • S3 files in date/hour folders; simple copy to archive/restore
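With files laid out in date/hour folders, archiving or restoring a time range is a plain prefix copy. A sketch of that key scheme; the exact prefix and file names are illustrative assumptions:

```python
from datetime import datetime, timezone

def log_key(ts: datetime, part: int) -> str:
    """S3 key under date/hour folders; prefix/part naming is illustrative."""
    return f"logs/{ts:%Y/%m/%d/%H}/part-{part:04d}.seq"

def archive_key(key: str) -> str:
    """Archival amounts to copying the same key under an archive prefix."""
    return key.replace("logs/", "archive/", 1)

ts = datetime(2024, 5, 17, 13, tzinfo=timezone.utc)
key = log_key(ts, 1)  # -> logs/2024/05/17/13/part-0001.seq
```

Restoring a day of logs is the reverse copy over the same date prefix; no database has to be updated, since all metadata lives in S3 alongside the data.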

Open and extensible

Open file formats and APIs

  • Log data files in Hadoop sequence file open format

  • Files in S3 can be processed by Hive or any HDFS engine 

  • Search, tail, alerts available as APIs to integrate into third party apps 

  • Supports Loki query format for search queries
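Because the data files use Hadoop's open SequenceFile format, any HDFS-aware engine can read them directly from S3. A SequenceFile starts with the 3-byte magic `SEQ` followed by a version byte, so a quick format sniff is trivial (this checks the magic only; it is not a full reader):

```python
def looks_like_sequence_file(header: bytes) -> bool:
    """Hadoop SequenceFiles begin with the magic bytes b'SEQ'
    followed by a one-byte format version."""
    return len(header) >= 4 and header[:3] == b"SEQ"

# A real header continues with key/value class names and compression
# flags; this synthetic sample carries only the magic and version.
sample_header = b"SEQ\x06" + b"\x00" * 12
```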

Easy ops

Self-throttling, auto scaling

  • Episilia cluster is stateless; no state in any live database or disk

  • Deployed via Helm charts or docker manifests 

  • Standard K8s/container nodes with AVX-compatible cores for SIMD

  • Any service can be started/restarted/stopped any time

  • Distributed cluster with co-operating nodes; no leader election

  • Local Redpanda for service co-ordination and communication

  • Prometheus-compatible for monitoring

  • All services support pause/resume control to avoid stop/restart

  • In-built throttling on memory usage; no crashes on high loads 

  • Fine grained metrics collected to analyze and tune throughput 
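Built-in memory throttling amounts to pausing intake when usage crosses a high-water mark and resuming once it falls below a lower one. A deterministic sketch of that hysteresis pattern; the thresholds and memory probe are stand-ins, not Episilia's values:

```python
class ThrottledConsumer:
    """Pause intake above a high-water mark, resume below a low one.
    Illustrative hysteresis sketch; thresholds are arbitrary."""

    def __init__(self, high_pct: float = 85.0, low_pct: float = 70.0):
        self.high = high_pct
        self.low = low_pct
        self.paused = False

    def on_memory_sample(self, used_pct: float) -> bool:
        """Return True if the consumer should accept work right now."""
        if not self.paused and used_pct >= self.high:
            self.paused = True    # back off instead of crashing
        elif self.paused and used_pct <= self.low:
            self.paused = False   # memory recovered, resume intake
        return not self.paused

c = ThrottledConsumer()
decisions = [c.on_memory_sample(p) for p in (50, 90, 80, 65, 50)]
# -> [True, False, False, True, True]: stays paused at 80 (between
#    marks), resumes only once usage drops to 65.
```

The gap between the two marks prevents rapid pause/resume flapping when memory hovers near a single threshold.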

Try it out  | 

1TB/month free

Try it in your cloud

  1. Contact us for a trial account

  2. Install using Helm charts, instructions here

  3. Send logs to Episilia cluster and view them

Contact us

