Noundry.AIG.Client 1.1.1

dotnet add package Noundry.AIG.Client --version 1.1.1
Noundry AI Gateway (AIG)
A feature-complete AI Gateway for .NET/C# developers - similar to Vercel's AI Gateway but built for the .NET ecosystem.
Overview
Noundry AI Gateway provides a unified interface to access multiple AI providers (OpenAI, Anthropic, Google Gemini, and more) through a single, consistent API. It abstracts away the complexity of working with different AI providers, handles failover, supports streaming, and includes built-in analytics.
Features
Core Capabilities
- Multi-Provider Support: OpenAI, Anthropic (Claude), Google (Gemini), and extensible for more
- Unified API: One consistent interface across all providers
- Provider Fallback: Automatic failover with the `order` parameter
- Streaming Support: Real-time response streaming from all providers
- Thread-Safe Client: Built with `HttpClientFactory` for production use
- Analytics & Logging: Built-in request/response logging using Tuxedo
- No Markup: Direct pass-through pricing when using your own API keys
Advanced Features
- Prompt Builder: Fluent API for building complex prompts
- Chain Prompt Builder: Chain multiple AI calls where output feeds into the next prompt
- TOON Serialization: Convert DataTable, DataSet, and collections to token-efficient TOON format for LLM consumption (~40% token savings)
- Dynamic Serialization: Automatically excludes null parameters per provider requirements
- Rate Limiting Ready: Infrastructure for rate limiting (easily extensible)
- Caching Ready: Infrastructure for response caching (easily extensible)
Project Structure
Noundry.AIG/
├── src/
│ ├── Noundry.AIG.Core/ # Core models, interfaces, enums
│ ├── Noundry.AIG.Providers/ # Provider implementations
│ ├── Noundry.AIG.Client/ # Client library (NuGet package)
│ └── Noundry.AIG.Api/ # Web API gateway
└── examples/
└── Noundry.AIG.Examples/ # Example usage
Hosted API
The Noundry AI Gateway is available as a hosted service at https://api.noundry.ai. This is a BYOK (Bring Your Own Key) API - you provide your own provider API keys and the gateway relays requests to the appropriate provider.
Using the Hosted API
curl -X POST "https://api.noundry.ai/v1/chat/completions" \
-H "Authorization: Bearer YOUR_GATEWAY_KEY" \
-H "X-API-KEY: Bearer YOUR_OPENAI_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4",
"messages": [{"role": "user", "content": "Hello!"}]
}'
How BYOK Works
| Header | Purpose |
|---|---|
| `Authorization` | Your gateway API key (for authentication to the gateway) |
| `X-API-KEY` | Your provider API key (passed through to OpenAI/Anthropic/Google) |
The gateway acts as a relay - it routes your request to the appropriate provider based on the model string (e.g., openai/gpt-4 → OpenAI API, anthropic/claude-sonnet-4 → Anthropic API) and passes your provider key directly. No markup on provider costs.
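The routing rule described above (the segment before the first `/` selects the provider, the remainder names the model) can be sketched in a few lines. This is an illustrative helper, not the gateway's actual implementation:

```csharp
using System;

static class ModelRouter
{
    // Split "provider/model" into its two routing parts.
    public static (string Provider, string Model) Route(string modelString)
    {
        int slash = modelString.IndexOf('/');
        if (slash < 0)
            throw new ArgumentException($"Expected 'provider/model', got '{modelString}'");
        return (modelString[..slash], modelString[(slash + 1)..]);
    }
}

// ModelRouter.Route("openai/gpt-4")              -> ("openai", "gpt-4")
// ModelRouter.Route("anthropic/claude-sonnet-4") -> ("anthropic", "claude-sonnet-4")
```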
Quick Start
1. Install the Client Library
dotnet add package Noundry.AIG.Client
2. Configure Providers
var httpClientFactory = new SimpleHttpClientFactory(); // Or use DI
var providerFactory = new ProviderFactory(httpClientFactory);
var options = new AigClientOptions
{
UseLocalProviders = true,
EnableRetries = true,
MaxRetries = 3,
ProviderConfigs = new Dictionary<string, ProviderConfig>
{
["openai"] = new ProviderConfig
{
ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
},
["anthropic"] = new ProviderConfig
{
ApiKey = Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")
}
}
};
var aigClient = new AigClient(providerFactory, options);
3. Send Your First Request
var prompt = new PromptBuilder()
.WithModel("anthropic/claude-sonnet-4")
.WithTemperature(0.7f)
.AddSystemMessage("You are a helpful AI assistant.")
.AddUserMessage("Explain quantum computing in simple terms.")
.Build();
var response = await aigClient.SendAsync(prompt);
Console.WriteLine(response.GetTextContent());
Usage Examples
Simple Prompt
var prompt = new PromptBuilder()
.WithModel("openai/gpt-4")
.AddUserMessage("What are the three primary colors?")
.Build();
var response = await aigClient.SendAsync(prompt);
Multi-Provider Fallback
Try multiple providers in order until one succeeds:
var prompt = new PromptBuilder()
.WithModels("openai/gpt-4", "anthropic/claude-sonnet-4", "google/gemini-pro")
.AddUserMessage("Generate a creative story idea.")
.Build();
var multiResponse = await aigClient.SendMultiAsync(prompt);
if (multiResponse.HasSuccess)
{
Console.WriteLine($"First successful response from: {multiResponse.FirstSuccess.Provider}");
Console.WriteLine(multiResponse.FirstSuccess.GetTextContent());
}
Streaming Responses
var prompt = new PromptBuilder()
.WithModel("anthropic/claude-sonnet-4")
.WithStreaming(true)
.AddUserMessage("Write a poem about nature.")
.Build();
await foreach (var chunk in aigClient.SendStreamAsync(prompt))
{
Console.Write(chunk.GetTextContent());
}
Chain Prompts
Execute a series of prompts where each output feeds into the next:
var chain = new ChainPromptBuilder()
.WithDefaultModel("anthropic/claude-sonnet-4")
// Step 1: Generate an idea
.AddStep("Generate Idea", _ =>
new PromptBuilder()
.AddUserMessage("Give me a random creative writing topic."))
// Step 2: Write about it
.AddStep("Write Content", previousOutput =>
new PromptBuilder()
.AddUserMessage($"Write a short paragraph about: {previousOutput}"))
// Step 3: Translate
.AddStep("Translate", previousOutput =>
new PromptBuilder()
.WithModel("openai/gpt-4")
.AddUserMessage($"Translate to Spanish:\n{previousOutput}"));
var result = await chain.ExecuteAsync(aigClient);
if (result.Success)
{
Console.WriteLine($"Final output: {result.FinalOutput}");
}
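The chain pattern above, where each step's output becomes the next step's input, reduces to a simple fold over the steps. A minimal sketch, independent of the library's `ChainPromptBuilder` (the string-returning functions stand in for AI calls):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class Chain
{
    // Run steps in order, threading each output into the next step.
    public static async Task<string> ExecuteAsync(
        string seed, IEnumerable<Func<string, Task<string>>> steps)
    {
        string output = seed;
        foreach (var step in steps)
            output = await step(output);   // previous output feeds the next prompt
        return output;
    }
}

// var result = await Chain.ExecuteAsync("", new Func<string, Task<string>>[]
// {
//     _    => Task.FromResult("space pirates"),           // step 1: generate idea
//     idea => Task.FromResult($"A story about {idea}."),  // step 2: write content
// });
```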
TOON Serialization for LLM Data
Convert data structures to TOON (Token-Oriented Object Notation) format for efficient LLM consumption. TOON reduces token usage by ~40% compared to JSON while maintaining data fidelity.
using Noundry.AIG.Core.Extensions;
using System.Data;
// DataTable to TOON
var table = new DataTable("sales");
table.Columns.Add("product", typeof(string));
table.Columns.Add("qty", typeof(int));
table.Columns.Add("price", typeof(decimal));
table.Rows.Add("Widget", 10, 9.99m);
table.Rows.Add("Gadget", 5, 14.50m);
var toonData = table.ToToon();
// Output:
// sales[2]{product,qty,price}:
// Widget,10,9.99
// Gadget,5,14.5
// Collection to TOON
var users = new List<User>
{
new User { Id = 1, Name = "Alice", Role = "admin" },
new User { Id = 2, Name = "Bob", Role = "user" }
};
var toonUsers = users.ToToon(new ToonOptions { RootName = "users" });
// Output:
// users[2]{Id,Name,Role}:
// 1,Alice,admin
// 2,Bob,user
// Use with AI requests
var request = new PromptBuilder()
.WithModel("anthropic/claude-sonnet-4")
.AddSystemMessage("Analyze the sales data provided in TOON format.")
.AddUserMessage($"Sales data:\n\n{toonData}\n\nProvide insights.")
.Build();
TOON Format Specification: https://github.com/toon-format/toon
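The tabular TOON layout shown above (a header line with the record count and field names, then one comma-separated line per record) can be approximated in a few lines of C#. This is an illustrative sketch of the format only, not the library's `ToToon` implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ToonSketch
{
    // Serialize uniform records as: root[count]{f1,f2}: followed by one CSV row each.
    public static string Serialize(
        string root, string[] fields, IEnumerable<object[]> rows)
    {
        var rowList = rows.ToList();
        var lines = new List<string>
        {
            $"{root}[{rowList.Count}]{{{string.Join(",", fields)}}}:"
        };
        lines.AddRange(rowList.Select(r => "  " + string.Join(",", r)));
        return string.Join("\n", lines);
    }
}

// ToonSketch.Serialize("users", new[] { "Id", "Name" },
//     new[] { new object[] { 1, "Alice" }, new object[] { 2, "Bob" } })
// produces:
// users[2]{Id,Name}:
//   1,Alice
//   2,Bob
```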
Advanced Configuration
var prompt = new PromptBuilder()
.WithModel("anthropic/claude-sonnet-4")
.WithTemperature(0.8f)
.WithMaxTokens(2000)
.WithTopP(0.9f)
.WithStopSequences("END", "STOP")
.WithRepetitionPenalty(1.15f)
.AddSystemMessage("You are a creative writing assistant.")
.AddUserMessage("Write a sci-fi story opening.")
.Build();
Web API Usage
Hosted API (Recommended)
The AI Gateway is deployed and available at https://api.noundry.ai. See the Hosted API section above for usage.
Self-Hosting
To run your own instance:
cd src/Noundry.AIG.Api
dotnet run
The API will be available at https://localhost:5001 (or configured port).
API Endpoints
POST /v1/chat/completions
Send a chat completion request.
Headers:
- `Authorization: Bearer YOUR_GATEWAY_API_KEY`
- `X-API-KEY: Bearer YOUR_PROVIDER_API_KEY` (optional)
Request Body:
{
"model": "anthropic/claude-sonnet-4",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Why is the sky blue?"
}
],
"temperature": 0.7,
"max_tokens": 1000,
"stream": false
}
Multi-Provider Request:
{
"model": "openai/gpt-4",
"order": ["openai/gpt-4", "anthropic/claude-sonnet-4", "google/gemini-pro"],
"messages": [
{
"role": "user",
"content": "What are the three primary colors?"
}
]
}
GET /v1/analytics/logs
Get recent request logs.
GET /v1/analytics/usage
Get provider usage statistics.
cURL Examples
Simple Request:
curl -X POST "https://api.noundry.ai/v1/chat/completions" \
-H "Authorization: Bearer demo-key-12345" \
-H "X-API-KEY: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4",
"messages": [
{
"role": "user",
"content": "Why is the sky blue?"
}
],
"stream": false
}'
Multi-Provider with Fallback:
curl -X POST "https://api.noundry.ai/v1/chat/completions" \
-H "Authorization: Bearer demo-key-12345" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4",
"order": ["openai/gpt-4", "anthropic/claude-sonnet-4"],
"messages": [
{
"role": "user",
"content": "What are the three primary colors?"
}
]
}'
Configuration
appsettings.json
{
"ConnectionStrings": {
"Analytics": "Data Source=aigw_analytics.db"
},
"ApiKeys": [
"demo-key-12345",
"your-api-key-here"
],
"Providers": {
"openai": {
"ApiKey": "YOUR_OPENAI_API_KEY",
"TimeoutSeconds": "120"
},
"anthropic": {
"ApiKey": "YOUR_ANTHROPIC_API_KEY",
"ApiVersionHeaderValue": "2023-06-01",
"TimeoutSeconds": "120"
},
"google": {
"ApiKey": "YOUR_GOOGLE_API_KEY",
"TimeoutSeconds": "120"
}
}
}
Architecture
Core Components
- Noundry.AIG.Core: Shared models, interfaces, and extensions
- Noundry.AIG.Providers: Provider-specific implementations
  - `OpenAiProvider`: OpenAI GPT models
  - `AnthropicProvider`: Anthropic Claude models
  - `GoogleProvider`: Google Gemini models
- Noundry.AIG.Client: Thread-safe client library with builders
- Noundry.AIG.Api: ASP.NET Core Web API gateway
Key Design Patterns
- Factory Pattern: `ProviderFactory` for creating provider instances
- Builder Pattern: `PromptBuilder` and `ChainPromptBuilder` for a fluent API
- Repository Pattern: `IAnalyticsRepository` for data access
- Strategy Pattern: Provider implementations via the `IAiProvider` interface
Data Access with Tuxedo
The API uses Noundry Tuxedo (Dapper-based ORM) for analytics and logging:
builder.Services.AddScoped<IDbConnection>(sp =>
{
var connectionString = builder.Configuration.GetConnectionString("Analytics");
var connection = new SqliteConnection(connectionString);
connection.Open();
return connection;
});
builder.Services.AddScoped<IAnalyticsRepository, AnalyticsRepository>();
Supported Models
OpenAI
- GPT-4 Turbo
- GPT-4
- GPT-3.5 Turbo
- All OpenAI chat models
Anthropic
- Claude 3.5 Sonnet
- Claude 3 Opus
- Claude 3 Sonnet
- Claude 3 Haiku
Google
- Gemini Pro
- Gemini Pro Vision
- Gemini Ultra
Custom OpenAI-Compatible Endpoints
The OpenAI provider supports custom endpoints for services like Nebius, Together AI, Groq, Ollama, and other OpenAI-compatible APIs.
Configuration
var options = new AigClientOptions
{
ProviderConfigs = new Dictionary<string, ProviderConfig>
{
["openai"] = new ProviderConfig
{
ApiKey = "your-api-key",
ApiEndpoint = "https://api.tokenfactory.nebius.com/v1/chat/completions"
}
}
};
Model Name Handling
When using custom endpoints, the full model name is preserved (after stripping the routing prefix):
| Request Model | Sent to API |
|---|---|
| `openai/openai/gpt-oss-120b` | `openai/gpt-oss-120b` |
| `openai/llama-3-70b` | `llama-3-70b` |
| `openai/gpt-4o` | `gpt-4o` |

For standard OpenAI (api.openai.com), the provider prefix is always stripped: `openai/gpt-4o` → `gpt-4o`.
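The stripping rule in the table amounts to removing exactly one leading `openai/` routing prefix and sending whatever remains (including any second vendor prefix) verbatim. A hypothetical helper, assuming the behavior described above:

```csharp
using System;

static class ModelNames
{
    // Strip a single leading "openai/" routing prefix; the remainder
    // (including a second vendor prefix) is sent to the API as-is.
    public static string ToApiModel(string requestModel) =>
        requestModel.StartsWith("openai/", StringComparison.Ordinal)
            ? requestModel["openai/".Length..]
            : requestModel;
}

// ModelNames.ToApiModel("openai/openai/gpt-oss-120b") -> "openai/gpt-oss-120b"
// ModelNames.ToApiModel("openai/llama-3-70b")         -> "llama-3-70b"
// ModelNames.ToApiModel("openai/gpt-4o")              -> "gpt-4o"
```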
Supported Custom Endpoints
| Provider | Base URL | Example Model |
|---|---|---|
| Nebius | https://api.tokenfactory.nebius.com/v1 | `openai/gpt-oss-120b` |
| Together AI | https://api.together.xyz/v1 | `meta-llama/Llama-3-70b-chat-hf` |
| Groq | https://api.groq.com/openai/v1 | `llama3-70b-8192` |
| Ollama | http://localhost:11434/v1 | `llama3` |
| LM Studio | http://localhost:1234/v1 | `local-model` |
Direct Config Usage
For custom providers, pass the config directly to avoid routing lookup issues:
var config = new ProviderConfig
{
ApiKey = "your-key",
ApiEndpoint = "https://api.tokenfactory.nebius.com/v1/chat/completions"
};
var prompt = new PromptBuilder()
.WithModel("openai/openai/gpt-oss-120b") // Full model path
.AddUserMessage("Hello!")
.Build();
// Pass config directly
await foreach (var chunk in aigClient.SendStreamAsync(prompt, config))
{
Console.Write(chunk.GetTextContent());
}
Extending with New Providers
To add a new provider:
- Create a new provider class inheriting from `BaseAiProvider`
- Implement provider-specific request/response transformation
- Register it in `ProviderFactory`
public class NewProvider : BaseAiProvider
{
public override string ProviderName => "newprovider";
public override string GetDefaultEndpoint() => "https://api.newprovider.com";
public override async Task<AiResponse> SendCompletionAsync(...)
{
// Implementation
}
}
Performance Considerations
- Uses `HttpClientFactory` for efficient HTTP connection management
- Thread-safe operations with `SemaphoreSlim`
- Configurable retry logic with exponential backoff
- Streaming support to reduce latency
- Connection pooling via `SqliteConnection`
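Retry with exponential backoff typically looks like the sketch below; the delay base and doubling schedule are illustrative, not the library's actual values:

```csharp
using System;
using System.Threading.Tasks;

static class Retry
{
    // Run an async operation, retrying up to maxRetries additional times
    // after failures, doubling the delay each time (100ms, 200ms, 400ms, ...).
    public static async Task<T> WithBackoffAsync<T>(
        Func<Task<T>> operation, int maxRetries = 3, int baseDelayMs = 100)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxRetries)
            {
                await Task.Delay(baseDelayMs * (1 << attempt));
            }
        }
    }
}
```

On the final attempt the exception filter no longer matches, so the last failure propagates to the caller unchanged.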
Security
- API key authentication via middleware
- Provider API keys stored in configuration (use Azure Key Vault in production)
- No API keys logged in analytics
- HTTPS enforcement
Analytics & Monitoring
Built-in request logging tracks:
- Provider and model used
- Token usage (input, output, total)
- Request duration
- Success/failure status
- Client IP address
Access via:
- GET `/v1/analytics/logs` - Recent request logs
- GET `/v1/analytics/usage?daysSince=7` - Provider usage stats
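The tracked fields above suggest a log row shaped roughly like the following record. This is a hypothetical shape for illustration; the actual Tuxedo entity and its field names may differ:

```csharp
using System;

// Hypothetical shape of one analytics log row; all names are illustrative.
public record RequestLog(
    string Provider,        // e.g. "anthropic"
    string Model,           // e.g. "claude-sonnet-4"
    int InputTokens,
    int OutputTokens,
    int TotalTokens,
    TimeSpan Duration,
    bool Success,
    string ClientIp,
    DateTimeOffset Timestamp);
```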
Roadmap
- Additional provider support (Mistral, Groq, Cohere, xAI)
- Rate limiting per API key
- Response caching layer
- Cost tracking and budgets
- Webhooks for async requests
- Admin dashboard
Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
License
MIT License - see LICENSE file for details.
Support
For issues, questions, or feature requests, please open an issue on GitHub.
Built with .NET 9.0 / .NET 10.0 | Powered by Noundry Tuxedo | Thread-Safe | Production-Ready
| Product | Compatible and computed target frameworks |
|---|---|
| .NET | net9.0 and net10.0 are compatible; platform-specific targets (android, browser, ios, maccatalyst, macos, tvos, windows) were computed for both. |
Dependencies

net10.0
- Noundry.AIG.Core (>= 1.1.1)
- Noundry.AIG.Providers (>= 1.1.1)

net9.0
- Noundry.AIG.Core (>= 1.1.1)
- Noundry.AIG.Providers (>= 1.1.1)