Noundry.AIG.Core 1.1.1

Install via the .NET CLI:

dotnet add package Noundry.AIG.Core --version 1.1.1

Or from the Package Manager Console in Visual Studio (uses the NuGet module's version of Install-Package):

NuGet\Install-Package Noundry.AIG.Core -Version 1.1.1

For projects that support PackageReference, add this node to the project file:

<PackageReference Include="Noundry.AIG.Core" Version="1.1.1" />

For projects using Central Package Management (CPM), version the package in the solution Directory.Packages.props file:

<PackageVersion Include="Noundry.AIG.Core" Version="1.1.1" />

and reference it in the project file:

<PackageReference Include="Noundry.AIG.Core" />

Other tools:

paket add Noundry.AIG.Core --version 1.1.1            (Paket)
#r "nuget: Noundry.AIG.Core, 1.1.1"                   (F# Interactive / Polyglot Notebooks)
#:package Noundry.AIG.Core@1.1.1                      (C# file-based apps, .NET 10 preview 4+; place before any lines of code)
#addin nuget:?package=Noundry.AIG.Core&version=1.1.1  (Cake Addin)
#tool nuget:?package=Noundry.AIG.Core&version=1.1.1   (Cake Tool)

Noundry AI Gateway (AIG)

A feature-complete AI Gateway for .NET/C# developers - similar to Vercel's AI Gateway but built for the .NET ecosystem.

Overview

Noundry AI Gateway provides a unified interface to access multiple AI providers (OpenAI, Anthropic, Google Gemini, and more) through a single, consistent API. It abstracts away the complexity of working with different AI providers, handles failover, supports streaming, and includes built-in analytics.

Features

Core Capabilities

  • Multi-Provider Support: OpenAI, Anthropic (Claude), Google (Gemini), and extensible for more
  • Unified API: One consistent interface across all providers
  • Provider Fallback: Automatic failover with the order parameter
  • Streaming Support: Real-time response streaming from all providers
  • Thread-Safe Client: Built with HttpClientFactory for production use
  • Analytics & Logging: Built-in request/response logging using Tuxedo
  • No Markup: Direct pass-through pricing when using your own API keys

Advanced Features

  • Prompt Builder: Fluent API for building complex prompts
  • Chain Prompt Builder: Chain multiple AI calls where output feeds into the next prompt
  • TOON Serialization: Convert DataTable, DataSet, and collections to token-efficient TOON format for LLM consumption (~40% token savings)
  • Dynamic Serialization: Automatically excludes null parameters per provider requirements
  • Rate Limiting Ready: Infrastructure for rate limiting (easily extensible)
  • Caching Ready: Infrastructure for response caching (easily extensible)

Project Structure

Noundry.AIG/
├── src/
│   ├── Noundry.AIG.Core/              # Core models, interfaces, enums
│   ├── Noundry.AIG.Providers/         # Provider implementations
│   ├── Noundry.AIG.Client/            # Client library (NuGet package)
│   └── Noundry.AIG.Api/               # Web API gateway
└── examples/
    └── Noundry.AIG.Examples/          # Example usage

Hosted API

The Noundry AI Gateway is available as a hosted service at https://api.noundry.ai. This is a BYOK (Bring Your Own Key) API - you provide your own provider API keys and the gateway relays requests to the appropriate provider.

Using the Hosted API

curl -X POST "https://api.noundry.ai/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_GATEWAY_KEY" \
  -H "X-API-KEY: Bearer YOUR_OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

How BYOK Works

Header          Purpose
Authorization   Your gateway API key (for authentication to the gateway)
X-API-KEY       Your provider API key (passed through to OpenAI/Anthropic/Google)

The gateway acts as a relay - it routes your request to the appropriate provider based on the model string (e.g., openai/gpt-4 → OpenAI API, anthropic/claude-sonnet-4 → Anthropic API) and passes your provider key directly. No markup on provider costs.
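The routing step can be sketched as follows. This is an illustrative sketch of the idea, not the gateway's actual internals; `ResolveProvider` and the provider set are assumed names:

```csharp
using System;
using System.Collections.Generic;

static class ModelRouter
{
    // Providers the gateway knows how to relay to (illustrative list).
    static readonly HashSet<string> KnownProviders =
        new HashSet<string> { "openai", "anthropic", "google" };

    // Split "openai/gpt-4" into the provider used for routing
    // and the model name forwarded to that provider's API.
    public static (string Provider, string Model) ResolveProvider(string modelString)
    {
        var slash = modelString.IndexOf('/');
        if (slash < 0)
            throw new ArgumentException($"Model string '{modelString}' has no provider prefix.");

        var provider = modelString.Substring(0, slash);
        if (!KnownProviders.Contains(provider))
            throw new ArgumentException($"Unknown provider '{provider}'.");

        return (provider, modelString.Substring(slash + 1));
    }
}
```

So `ResolveProvider("anthropic/claude-sonnet-4")` yields provider `anthropic` and model `claude-sonnet-4`, which is then sent to the Anthropic API with your passed-through key.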


Quick Start

1. Install the Client Library

dotnet add package Noundry.AIG.Client

2. Configure Providers

var httpClientFactory = new SimpleHttpClientFactory(); // Or use DI
var providerFactory = new ProviderFactory(httpClientFactory);

var options = new AigClientOptions
{
    UseLocalProviders = true,
    EnableRetries = true,
    MaxRetries = 3,
    ProviderConfigs = new Dictionary<string, ProviderConfig>
    {
        ["openai"] = new ProviderConfig
        {
            ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        },
        ["anthropic"] = new ProviderConfig
        {
            ApiKey = Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")
        }
    }
};

var aigClient = new AigClient(providerFactory, options);

3. Send Your First Request

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithTemperature(0.7f)
    .AddSystemMessage("You are a helpful AI assistant.")
    .AddUserMessage("Explain quantum computing in simple terms.")
    .Build();

var response = await aigClient.SendAsync(prompt);
Console.WriteLine(response.GetTextContent());

Usage Examples

Simple Prompt

var prompt = new PromptBuilder()
    .WithModel("openai/gpt-4")
    .AddUserMessage("What are the three primary colors?")
    .Build();

var response = await aigClient.SendAsync(prompt);

Multi-Provider Fallback

Try multiple providers in order until one succeeds:

var prompt = new PromptBuilder()
    .WithModels("openai/gpt-4", "anthropic/claude-sonnet-4", "google/gemini-pro")
    .AddUserMessage("Generate a creative story idea.")
    .Build();

var multiResponse = await aigClient.SendMultiAsync(prompt);

if (multiResponse.HasSuccess)
{
    Console.WriteLine($"First successful response from: {multiResponse.FirstSuccess.Provider}");
    Console.WriteLine(multiResponse.FirstSuccess.GetTextContent());
}

Streaming Responses

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithStreaming(true)
    .AddUserMessage("Write a poem about nature.")
    .Build();

await foreach (var chunk in aigClient.SendStreamAsync(prompt))
{
    Console.Write(chunk.GetTextContent());
}

Chain Prompts

Execute a series of prompts where each output feeds into the next:

var chain = new ChainPromptBuilder()
    .WithDefaultModel("anthropic/claude-sonnet-4")

    // Step 1: Generate an idea
    .AddStep("Generate Idea", _ =>
        new PromptBuilder()
            .AddUserMessage("Give me a random creative writing topic."))

    // Step 2: Write about it
    .AddStep("Write Content", previousOutput =>
        new PromptBuilder()
            .AddUserMessage($"Write a short paragraph about: {previousOutput}"))

    // Step 3: Translate
    .AddStep("Translate", previousOutput =>
        new PromptBuilder()
            .WithModel("openai/gpt-4")
            .AddUserMessage($"Translate to Spanish:\n{previousOutput}"));

var result = await chain.ExecuteAsync(aigClient);

if (result.Success)
{
    Console.WriteLine($"Final output: {result.FinalOutput}");
}

TOON Serialization for LLM Data

Convert data structures to TOON (Token-Oriented Object Notation) format for efficient LLM consumption. TOON reduces token usage by ~40% compared to JSON while maintaining data fidelity.

using Noundry.AIG.Core.Extensions;
using System.Data;

// DataTable to TOON
var table = new DataTable("sales");
table.Columns.Add("product", typeof(string));
table.Columns.Add("qty", typeof(int));
table.Columns.Add("price", typeof(decimal));
table.Rows.Add("Widget", 10, 9.99m);
table.Rows.Add("Gadget", 5, 14.50m);

var toonData = table.ToToon();
// Output:
// sales[2]{product,qty,price}:
//   Widget,10,9.99
//   Gadget,5,14.5

// Collection to TOON
var users = new List<User>
{
    new User { Id = 1, Name = "Alice", Role = "admin" },
    new User { Id = 2, Name = "Bob", Role = "user" }
};

var toonUsers = users.ToToon(new ToonOptions { RootName = "users" });
// Output:
// users[2]{Id,Name,Role}:
//   1,Alice,admin
//   2,Bob,user

// Use with AI requests
var request = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .AddSystemMessage("Analyze the sales data provided in TOON format.")
    .AddUserMessage($"Sales data:\n\n{toonData}\n\nProvide insights.")
    .Build();

TOON Format Specification: https://github.com/toon-format/toon

Advanced Configuration

var prompt = new PromptBuilder()
    .WithModel("anthropic/claude-sonnet-4")
    .WithTemperature(0.8f)
    .WithMaxTokens(2000)
    .WithTopP(0.9f)
    .WithStopSequences("END", "STOP")
    .WithRepetitionPenalty(1.15f)
    .AddSystemMessage("You are a creative writing assistant.")
    .AddUserMessage("Write a sci-fi story opening.")
    .Build();

Web API Usage

The AI Gateway is deployed and available at https://api.noundry.ai. See the Hosted API section above for usage.

Self-Hosting

To run your own instance:

cd src/Noundry.AIG.Api
dotnet run

The API will be available at https://localhost:5001 (or configured port).

API Endpoints

POST /v1/chat/completions

Send a chat completion request.

Headers:

  • Authorization: Bearer YOUR_GATEWAY_API_KEY
  • X-API-KEY: Bearer YOUR_PROVIDER_API_KEY (optional)

Request Body:

{
  "model": "anthropic/claude-sonnet-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Why is the sky blue?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}

Multi-Provider Request:

{
  "model": "openai/gpt-4",
  "order": ["openai/gpt-4", "anthropic/claude-sonnet-4", "google/gemini-pro"],
  "messages": [
    {
      "role": "user",
      "content": "What are the three primary colors?"
    }
  ]
}

GET /v1/analytics/logs

Get recent request logs.

GET /v1/analytics/usage

Get provider usage statistics.

cURL Examples

Simple Request:

curl -X POST "https://api.noundry.ai/v1/chat/completions" \
  -H "Authorization: Bearer demo-key-12345" \
  -H "X-API-KEY: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "Why is the sky blue?"
      }
    ],
    "stream": false
  }'

Multi-Provider with Fallback:

curl -X POST "https://api.noundry.ai/v1/chat/completions" \
  -H "Authorization: Bearer demo-key-12345" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4",
    "order": ["openai/gpt-4", "anthropic/claude-sonnet-4"],
    "messages": [
      {
        "role": "user",
        "content": "What are the three primary colors?"
      }
    ]
  }'

Configuration

appsettings.json

{
  "ConnectionStrings": {
    "Analytics": "Data Source=aigw_analytics.db"
  },
  "ApiKeys": [
    "demo-key-12345",
    "your-api-key-here"
  ],
  "Providers": {
    "openai": {
      "ApiKey": "YOUR_OPENAI_API_KEY",
      "TimeoutSeconds": "120"
    },
    "anthropic": {
      "ApiKey": "YOUR_ANTHROPIC_API_KEY",
      "ApiVersionHeaderValue": "2023-06-01",
      "TimeoutSeconds": "120"
    },
    "google": {
      "ApiKey": "YOUR_GOOGLE_API_KEY",
      "TimeoutSeconds": "120"
    }
  }
}

Architecture

Core Components

  1. Noundry.AIG.Core: Shared models, interfaces, and extensions
  2. Noundry.AIG.Providers: Provider-specific implementations
    • OpenAiProvider: OpenAI GPT models
    • AnthropicProvider: Anthropic Claude models
    • GoogleProvider: Google Gemini models
  3. Noundry.AIG.Client: Thread-safe client library with builders
  4. Noundry.AIG.Api: ASP.NET Core Web API gateway

Key Design Patterns

  • Factory Pattern: ProviderFactory for creating provider instances
  • Builder Pattern: PromptBuilder and ChainPromptBuilder for fluent API
  • Repository Pattern: IAnalyticsRepository for data access
  • Strategy Pattern: Provider implementations via IAiProvider interface
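The Strategy pattern here means each provider implements a common interface and the factory picks one at runtime. A minimal sketch of the shape, using stand-in types (the real IAiProvider surface is async and richer than this):

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the real IAiProvider interface (illustrative only).
interface IDemoAiProvider
{
    string ProviderName { get; }
    string Complete(string prompt);
}

class DemoOpenAiProvider : IDemoAiProvider
{
    public string ProviderName => "openai";
    public string Complete(string prompt) => $"[openai] {prompt}";
}

class DemoAnthropicProvider : IDemoAiProvider
{
    public string ProviderName => "anthropic";
    public string Complete(string prompt) => $"[anthropic] {prompt}";
}

// The factory selects the strategy by provider name, as ProviderFactory does.
class DemoProviderFactory
{
    readonly Dictionary<string, IDemoAiProvider> _providers = new();

    public DemoProviderFactory(IEnumerable<IDemoAiProvider> providers)
    {
        foreach (var p in providers) _providers[p.ProviderName] = p;
    }

    public IDemoAiProvider Get(string name) => _providers[name];
}
```

Adding a provider then means writing one new class and registering it; no call sites change, because everything downstream works against the interface.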

Data Access with Tuxedo

The API uses Noundry Tuxedo (Dapper-based ORM) for analytics and logging:

builder.Services.AddScoped<IDbConnection>(sp =>
{
    var connectionString = builder.Configuration.GetConnectionString("Analytics");
    var connection = new SqliteConnection(connectionString);
    connection.Open();
    return connection;
});

builder.Services.AddScoped<IAnalyticsRepository, AnalyticsRepository>();

Supported Models

OpenAI

  • GPT-4 Turbo
  • GPT-4
  • GPT-3.5 Turbo
  • All OpenAI chat models

Anthropic

  • Claude 3.5 Sonnet
  • Claude 3 Opus
  • Claude 3 Sonnet
  • Claude 3 Haiku

Google

  • Gemini Pro
  • Gemini Pro Vision
  • Gemini Ultra

Custom OpenAI-Compatible Endpoints

The OpenAI provider supports custom endpoints for services like Nebius, Together AI, Groq, Ollama, and other OpenAI-compatible APIs.

Configuration

var options = new AigClientOptions
{
    ProviderConfigs = new Dictionary<string, ProviderConfig>
    {
        ["openai"] = new ProviderConfig
        {
            ApiKey = "your-api-key",
            ApiEndpoint = "https://api.tokenfactory.nebius.com/v1/chat/completions"
        }
    }
};

Model Name Handling

When using custom endpoints, the full model name is preserved (after stripping the routing prefix):

Request Model                 Sent to API
openai/openai/gpt-oss-120b    openai/gpt-oss-120b
openai/llama-3-70b            llama-3-70b
openai/gpt-4o                 gpt-4o

For standard OpenAI (api.openai.com), the provider prefix is always stripped:

  • openai/gpt-4o → gpt-4o
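The prefix handling in the table above amounts to stripping only the first path segment, so any remaining slashes survive. A sketch, assuming a helper name (`StripRoutingPrefix`) that is not part of the library:

```csharp
using System;

static class ModelNames
{
    // Remove only the first "provider/" segment; everything after it
    // (including a second slash, as in "openai/openai/gpt-oss-120b")
    // is forwarded to the API unchanged.
    public static string StripRoutingPrefix(string requestModel)
    {
        var slash = requestModel.IndexOf('/');
        return slash < 0 ? requestModel : requestModel.Substring(slash + 1);
    }
}
```

This reproduces each row of the table: `openai/openai/gpt-oss-120b` becomes `openai/gpt-oss-120b`, while `openai/gpt-4o` becomes `gpt-4o`.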

Supported Custom Endpoints

Provider     Base URL                                  Example Model
Nebius       https://api.tokenfactory.nebius.com/v1    openai/gpt-oss-120b
Together AI  https://api.together.xyz/v1               meta-llama/Llama-3-70b-chat-hf
Groq         https://api.groq.com/openai/v1            llama3-70b-8192
Ollama       http://localhost:11434/v1                 llama3
LM Studio    http://localhost:1234/v1                  local-model

Direct Config Usage

For custom providers, pass the config directly to avoid routing lookup issues:

var config = new ProviderConfig
{
    ApiKey = "your-key",
    ApiEndpoint = "https://api.tokenfactory.nebius.com/v1/chat/completions"
};

var prompt = new PromptBuilder()
    .WithModel("openai/openai/gpt-oss-120b")  // Full model path
    .AddUserMessage("Hello!")
    .Build();

// Pass config directly
await foreach (var chunk in aigClient.SendStreamAsync(prompt, config))
{
    Console.Write(chunk.GetTextContent());
}

Extending with New Providers

To add a new provider:

  1. Create a new provider class inheriting from BaseAiProvider
  2. Implement provider-specific request/response transformation
  3. Register it in ProviderFactory

public class NewProvider : BaseAiProvider
{
    public override string ProviderName => "newprovider";

    public override string GetDefaultEndpoint() => "https://api.newprovider.com";

    public override async Task<AiResponse> SendCompletionAsync(...)
    {
        // Implementation
    }
}

Performance Considerations

  • Uses HttpClientFactory for efficient HTTP connection management
  • Thread-safe operations with SemaphoreSlim
  • Configurable retry logic with exponential backoff
  • Streaming support to reduce latency
  • Connection pooling via SqliteConnection
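The retry behavior mentioned above can be sketched as a generic backoff wrapper. The doubling schedule and `baseDelayMs` default here are illustrative, not the client's actual retry schedule:

```csharp
using System;
using System.Threading.Tasks;

static class Retry
{
    // Retry an operation up to maxRetries additional times, doubling
    // the delay after each failure (100ms, 200ms, 400ms, ...).
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation, int maxRetries = 3, int baseDelayMs = 100)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxRetries)
            {
                // Exponential backoff: delay grows as baseDelayMs * 2^attempt.
                await Task.Delay(baseDelayMs * (1 << attempt));
            }
        }
    }
}
```

The exception filter rethrows the original exception once the retry budget is spent, so callers see the real failure rather than a wrapper.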

Security

  • API key authentication via middleware
  • Provider API keys stored in configuration (use Azure Key Vault in production)
  • No API keys logged in analytics
  • HTTPS enforcement

Analytics & Monitoring

Built-in request logging tracks:

  • Provider and model used
  • Token usage (input, output, total)
  • Request duration
  • Success/failure status
  • Client IP address

Access via:

  • GET /v1/analytics/logs - Recent request logs
  • GET /v1/analytics/usage?daysSince=7 - Provider usage stats
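Calling the analytics endpoints from C# is a plain authenticated GET; the base URL and gateway key below are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class AnalyticsDemo
{
    // Build the usage-stats request for the last daysSince days.
    public static HttpRequestMessage BuildUsageRequest(
        string baseUrl, string gatewayKey, int daysSince)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get, $"{baseUrl}/v1/analytics/usage?daysSince={daysSince}");
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", gatewayKey);
        return request;
    }

    // Send it (requires a reachable gateway instance).
    public static async Task<string> GetUsageAsync(
        HttpClient http, string baseUrl, string gatewayKey)
    {
        using var response = await http.SendAsync(BuildUsageRequest(baseUrl, gatewayKey, 7));
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

The same pattern with the `/v1/analytics/logs` path retrieves recent request logs.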

Roadmap

  • Additional provider support (Mistral, Groq, Cohere, xAI)
  • Rate limiting per API key
  • Response caching layer
  • Cost tracking and budgets
  • Webhooks for async requests
  • Admin dashboard

Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.

License

MIT License - see LICENSE file for details.

Support

For issues, questions, or feature requests, please open an issue on GitHub.


Built with .NET 9.0 / .NET 10.0 | Powered by Noundry Tuxedo | Thread-Safe | Production-Ready

Compatible and additional computed target framework versions

.NET: net9.0 and net10.0 are compatible. The android, browser, ios, maccatalyst, macos, tvos, and windows variants of each were computed.

Included target framework(s) (in package):

  • net10.0 (no dependencies)
  • net9.0 (no dependencies)

NuGet packages (2)

Showing the top 2 NuGet packages that depend on Noundry.AIG.Core:

  • Noundry.AIG.Providers: AI provider implementations for Noundry AI Gateway. Supports OpenAI, Anthropic, Google, and custom OpenAI-compatible endpoints.
  • Noundry.AIG.Client: Client library for Noundry AI Gateway. Unified interface for OpenAI, Anthropic, Google, and custom OpenAI-compatible endpoints.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version  Downloads  Last Updated
1.1.1    123        3/4/2026
1.1.0    111        3/4/2026