
My Go-To .NET Libraries (And Why They Made the Cut)

Vishnu Unnikrishnan

February 3, 2026
8 min read

Let me be honest: I used to reinvent the wheel. A lot. Custom logging frameworks, hand-rolled validation, retry logic scattered across services. You name it, I probably built it from scratch at some point. Then I learned a painful truth: the best code is often the code you don't write.

This post is about the nine libraries that fundamentally changed how I build .NET applications. These aren't just tools I tried once and liked; they're dependencies I've relied on across multiple projects, through tight deadlines and unexpected scaling challenges. They've proven themselves where it counts: in production, under pressure, and in codebases I'd actually choose to maintain again.

Whether you're starting a new project or looking to level up an existing one, here's the toolkit that's earned a permanent spot in my .csproj files.


1. Serilog - Structured Logging Made Simple#

What it does: Serilog is a structured logging library that makes log data easier to search, analyze, and monitor.

Why I chose it: Unlike traditional text-based logging, Serilog captures log events as structured data. This meant I could easily query logs in production to find specific errors or track user behavior patterns.

Key benefits in my project:

  • Seamless integration with Application Insights and Seq for centralized logging
  • Rich contextual information with log enrichers (machine name, thread ID, user claims)
  • Minimal performance overhead with asynchronous sinks
  • Easy filtering and log level configuration per namespace

Example usage:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .WriteTo.Console()
    .WriteTo.File("logs/app-.txt", rollingInterval: RollingInterval.Day)
    .CreateLogger();

Log.Information("User {UserId} completed checkout with {ItemCount} items", userId, itemCount);

2. Entity Framework Core - The ORM Powerhouse#

What it does: EF Core is Microsoft's object-relational mapper that eliminates most data-access code you'd normally need to write.

Why I chose it: For complex domain models with relationships, EF Core provides excellent developer productivity. The LINQ queries are type-safe and the change tracking simplifies data updates.

Key benefits in my project:

  • Migrations made database schema evolution painless
  • Navigation properties simplified working with related entities
  • Built-in support for optimistic concurrency
  • Excellent integration with dependency injection and the .NET ecosystem

Best practices I followed:

  • Used .AsNoTracking() for read-only queries to improve performance
  • Implemented the repository pattern for better testability
  • Leveraged compiled queries for frequently executed operations
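
Here's a minimal sketch of the last two points in practice; AppDbContext and Product are placeholder types standing in for the real model:

// using Microsoft.EntityFrameworkCore;

// Compiled once and reused, so EF Core skips re-translating the LINQ expression on every call
private static readonly Func<AppDbContext, int, Task<Product?>> GetProductByIdQuery =
    EF.CompileAsyncQuery((AppDbContext db, int id) =>
        db.Products.FirstOrDefault(p => p.Id == id));

// Read-only listing: AsNoTracking avoids change-tracking overhead
public Task<List<Product>> GetCatalogAsync() =>
    _dbContext.Products.AsNoTracking().ToListAsync();

public Task<Product?> GetProductByIdAsync(int id) =>
    GetProductByIdQuery(_dbContext, id);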

3. Dapper - Micro-ORM for Performance-Critical Operations#

What it does: Dapper is a lightweight object mapper that extends IDbConnection with high-performance CRUD operations.

Why I chose it alongside EF Core: While EF Core handled most operations, I needed raw SQL performance for complex reporting queries and bulk operations.

Key benefits in my project:

  • Lightning-fast query execution (nearly as fast as using raw ADO.NET)
  • Simple mapping of query results to POCOs
  • Multi-mapping support for complex joins
  • No change tracking overhead

When I used it:

// Complex reporting query that would be inefficient in EF Core
var report = await connection.QueryAsync<SalesReport>(
    @"SELECT p.Name, SUM(oi.Quantity) as TotalSold, SUM(oi.Price * oi.Quantity) as Revenue
      FROM Products p
      JOIN OrderItems oi ON p.Id = oi.ProductId
      WHERE oi.OrderDate >= @StartDate
      GROUP BY p.Name
      ORDER BY Revenue DESC",
    new { StartDate = DateTime.UtcNow.AddMonths(-1) }
);
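
Multi-mapping was the other feature that earned its keep: one round trip that hydrates two related objects. A rough sketch, with Order and Customer as placeholder types:

// Maps each row into an Order plus its Customer in a single query
var orders = await connection.QueryAsync<Order, Customer, Order>(
    @"SELECT o.*, c.*
      FROM Orders o
      JOIN Customers c ON c.Id = o.CustomerId
      WHERE o.OrderDate >= @StartDate",
    (order, customer) => { order.Customer = customer; return order; },
    new { StartDate = DateTime.UtcNow.AddDays(-7) },
    splitOn: "Id"); // column at which the Customer part of each row begins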

4. FluentValidation - Elegant Input Validation#

What it does: FluentValidation is a validation library that uses a fluent interface and lambda expressions for building strongly-typed validation rules.

Why I chose it: It separates validation logic from domain models and provides much more flexibility than data annotations.

Key benefits in my project:

  • Clean separation of concerns with dedicated validator classes
  • Easy to write complex, conditional validation rules
  • Excellent integration with ASP.NET Core (automatic model validation)
  • Reusable validation rules across different scenarios

Example validator:

public class CreateOrderValidator : AbstractValidator<CreateOrderCommand>
{
    public CreateOrderValidator()
    {
        RuleFor(x => x.CustomerId)
            .NotEmpty()
            .WithMessage("Customer ID is required");

        RuleFor(x => x.Items)
            .NotEmpty()
            .Must(items => items.Count > 0)
            .WithMessage("Order must contain at least one item");

        RuleFor(x => x.ShippingAddress)
            .SetValidator(new AddressValidator())
            .When(x => x.RequiresShipping);
    }
}
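
The ASP.NET Core integration is a one-time registration at startup. A sketch assuming the FluentValidation.DependencyInjectionExtensions and FluentValidation.AspNetCore packages:

// Program.cs - registers every validator in the assembly, then hooks them into model binding
builder.Services.AddValidatorsFromAssemblyContaining<CreateOrderValidator>();
builder.Services.AddFluentValidationAutoValidation(); // automatic validation for controller actions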

5. Polly - Resilience and Fault Handling#

What it does: Polly is a resilience and transient-fault-handling library that allows you to express policies like Retry, Circuit Breaker, Timeout, and Bulkhead Isolation.

Why I chose it: Modern applications integrate with many external services. Polly helped me handle transient failures gracefully without cluttering business logic.

Key benefits in my project:

  • Automatic retry with exponential backoff for HTTP calls
  • Circuit breaker prevented cascading failures
  • Timeout policies protected against hanging requests
  • Combined policies created sophisticated resilience strategies

Practical implementation:

var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, retryAttempt => 
        TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

var circuitBreaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(5, TimeSpan.FromMinutes(1));

var combinedPolicy = Policy.WrapAsync(retryPolicy, circuitBreaker);

await combinedPolicy.ExecuteAsync(async () => 
    await _httpClient.GetAsync("https://api.external.com/data"));
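
In practice I attached these policies through HttpClientFactory so every outgoing call was covered without touching individual call sites. A sketch assuming the Microsoft.Extensions.Http.Polly package; the client name and base address are placeholders:

// Program.cs - every request through this named client passes through the retry policy
builder.Services.AddHttpClient("external-api", client =>
    {
        client.BaseAddress = new Uri("https://api.external.com/");
        client.Timeout = TimeSpan.FromSeconds(30);
    })
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError() // 5xx, 408, and HttpRequestException
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));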

6. Hangfire - Background Job Processing#

What it does: Hangfire is a background job processing framework that lets you run work outside of the request-response cycle.

Why I chose it: I needed reliable background processing for email notifications, report generation, and data synchronization without managing message queues.

Key benefits in my project:

  • Persistent storage ensures jobs aren't lost on application restart
  • Built-in retry mechanism with exponential backoff
  • User-friendly dashboard for monitoring job execution
  • Support for recurring jobs (cron-like scheduling)

Use cases:

// Fire-and-forget
BackgroundJob.Enqueue(() => SendWelcomeEmail(userId));

// Delayed execution
BackgroundJob.Schedule(() => SendReminderEmail(userId), TimeSpan.FromDays(7));

// Recurring jobs
RecurringJob.AddOrUpdate("daily-report", 
    () => GenerateDailyReport(), 
    Cron.Daily);
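
The setup behind those calls is a few lines at startup. A sketch using SQL Server storage; the connection string name and dashboard path are placeholders:

// Program.cs
builder.Services.AddHangfire(config => config
    .UseSqlServerStorage(builder.Configuration.GetConnectionString("HangfireDb")));
builder.Services.AddHangfireServer();      // hosts the background processing server

var app = builder.Build();
app.UseHangfireDashboard("/jobs");         // monitoring dashboard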

7. OpenTelemetry - Observability and Distributed Tracing#

What it does: OpenTelemetry provides a unified set of APIs and libraries to generate, collect, and export telemetry data (traces, metrics, logs).

Why I chose it: In a microservices architecture, understanding request flow across services is crucial. OpenTelemetry gave me vendor-neutral observability.

Key benefits in my project:

  • Distributed tracing showed exact bottlenecks in request pipelines
  • Automatic instrumentation for ASP.NET Core, HttpClient, and EF Core
  • Vendor-agnostic (can export to Jaeger, Prometheus, Azure Monitor, etc.)
  • Rich context propagation across service boundaries

Configuration:

services.AddOpenTelemetry()
    .WithTracing(builder => builder
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddEntityFrameworkCoreInstrumentation()
        .AddJaegerExporter())
    .WithMetrics(builder => builder
        .AddAspNetCoreInstrumentation()
        .AddPrometheusExporter());
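
Automatic instrumentation covers the plumbing; for business-level steps I added manual spans with an ActivitySource. A sketch where the source name, tag, and SaveOrderAsync helper are placeholders, and the source still needs to be registered with .AddSource(...) in the tracing builder above:

// Custom span around a business operation, alongside the automatic ones
private static readonly ActivitySource Checkout = new("MyCompany.Checkout");

public async Task ProcessOrderAsync(Order order)
{
    using var activity = Checkout.StartActivity("ProcessOrder");
    activity?.SetTag("order.id", order.Id);

    await SaveOrderAsync(order); // placeholder for the real work
}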

8. Scalar - Modern API Documentation#

What it does: Scalar is a modern, interactive API documentation tool that provides a beautiful UI for exploring and testing APIs.

Why I chose it: While Swagger/OpenAPI is standard, Scalar offers a more polished, developer-friendly experience with better performance.

Key benefits in my project:

  • Clean, modern UI that clients actually enjoyed using
  • Fast rendering even with large API specifications
  • Interactive API testing directly in the browser
  • Better customization options for branding

Integration:

app.MapScalarApiReference(options =>
{
    options.Title = "My API Documentation";
    options.Theme = ScalarTheme.Purple;
});
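
Scalar renders whatever OpenAPI document the app exposes, so that document still has to be generated. On .NET 9 the built-in generator is enough; this is a rough sketch (Swashbuckle or NSwag can produce the document on earlier versions):

builder.Services.AddOpenApi();   // built-in OpenAPI document generation (.NET 9+)

var app = builder.Build();
app.MapOpenApi();                // serves /openapi/v1.json, which Scalar reads by default
app.MapScalarApiReference();     // the documentation UI configured above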

9. Redis - High-Performance Caching and Data Store#

What it does: Redis is an in-memory data structure store used as a cache, message broker, and database.

Why I chose it: Application performance demanded fast data access, and Redis provided sub-millisecond response times.

Key benefits in my project:

  • Dramatically reduced database load by caching frequently accessed data
  • Distributed caching across multiple application instances
  • Session storage for user data in stateless web applications
  • Pub/Sub for real-time features

Caching strategy:

public async Task<Product> GetProductAsync(int productId)
{
    var cacheKey = $"product:{productId}";
    
    var cached = await _redis.StringGetAsync(cacheKey);
    if (cached.HasValue)
        return JsonSerializer.Deserialize<Product>(cached);
    
    var product = await _dbContext.Products.FindAsync(productId);
    
    await _redis.StringSetAsync(
        cacheKey, 
        JsonSerializer.Serialize(product),
        TimeSpan.FromMinutes(30)
    );
    
    return product;
}
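
Pub/Sub deserves a mention too, since it powered the real-time features. A sketch with StackExchange.Redis; the channel name, payload, and _connectionMultiplexer field are placeholders:

// Publisher side - e.g. right after an order is created
var subscriber = _connectionMultiplexer.GetSubscriber();
await subscriber.PublishAsync(RedisChannel.Literal("orders:created"), orderId.ToString());

// Subscriber side - e.g. in a hosted service that pushes notifications to clients
await subscriber.SubscribeAsync(RedisChannel.Literal("orders:created"), (channel, message) =>
{
    Console.WriteLine($"Order created: {message}");
});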

Conclusion: Building on the Shoulders of Giants#

Looking back at this project, I'm struck by how much these nine libraries transformed what could have been months of custom infrastructure into weeks of focused business logic. That's not just efficiency, that's leverage.

Here's what this stack actually delivered:

Challenge | Solution | Impact
Debugging production issues | Serilog + OpenTelemetry | From hours of guesswork to minutes of clarity
Data access flexibility | EF Core + Dapper | Developer speed and raw performance when needed
Input validation chaos | FluentValidation | Clean, testable, maintainable rules
External service failures | Polly | Graceful degradation instead of cascading crashes
Background processing | Hangfire | Fire-and-forget with full visibility
API documentation | Scalar | Docs that developers actually want to use
Performance bottlenecks | Redis | Sub-millisecond responses at scale

The real magic isn't any single library; it's how they compose together. Serilog feeds into OpenTelemetry. Polly wraps HttpClient calls that hit Redis-cached data. Hangfire jobs use FluentValidation before processing. Each piece amplifies the others.

The deeper lesson? Great software isn't about writing clever code. It's about assembling the right building blocks and focusing your energy where it actually matters: solving the problems unique to your domain.

These libraries let me ship faster, debug easier, and sleep better knowing the application could handle whatever production threw at it. That's not just technical success. That's the kind of outcome that makes this work genuinely satisfying.


Now I'm Curious About Your Stack#

Which library here is your ride-or-die? The one you'd refuse to start a project without?

What am I missing? There's always that one library someone mentions that makes you think, "Where has this been all my life?"

Disagree with any of these picks? Even better. The best technical decisions come from honest debate, not echo chambers.

Drop a comment below. I read every one, and the best recommendations might just make it into a follow-up post.


Building something interesting with .NET? I'd love to hear what's in your toolkit.
