LocalAI.NET 1.0.1

.NET CLI:

    dotnet add package LocalAI.NET --version 1.0.1

Package Manager (run in the Visual Studio Package Manager Console, which provides the NuGet module's Install-Package):

    NuGet\Install-Package LocalAI.NET -Version 1.0.1

PackageReference (for projects that support it, copy this XML node into the project file):

    <PackageReference Include="LocalAI.NET" Version="1.0.1" />

Paket CLI:

    paket add LocalAI.NET --version 1.0.1

Script & Interactive (the #r directive works in F# Interactive and Polyglot Notebooks):

    #r "nuget: LocalAI.NET, 1.0.1"

Cake:

    // Install LocalAI.NET as a Cake Addin
    #addin nuget:?package=LocalAI.NET&version=1.0.1

    // Install LocalAI.NET as a Cake Tool
    #tool nuget:?package=LocalAI.NET&version=1.0.1

LocalAI.NET (In Progress)


A .NET client library for interacting with local LLM providers like KoboldCpp, Ollama, LM Studio, and Text Generation WebUI. This library provides a simple, efficient, and cross-platform way to use local AI models in your .NET applications.

📦 View Package on NuGet

Alternatives To

  • OpenAI.NET SDK (for those who want local/offline alternatives)
  • LLamaSharp (simpler API, multiple provider support)
  • OllamaSharp (supports multiple providers beyond Ollama)

Features

  • Unified API for multiple local AI providers:
    • KoboldCpp (Native and OpenAI-compatible modes)
    • Ollama
    • LM Studio
    • Text Generation WebUI
  • Streaming and non-streaming text completion
  • Progress tracking and error handling
  • Configurable retry policies and circuit breakers
  • Full async/await support
  • Comprehensive logging capabilities
  • Built-in fault tolerance

Installation

Install LocalAI.NET via NuGet:

dotnet add package LocalAI.NET

Quick Start

using LocalAI.NET.Client;
using LocalAI.NET.Models.Configuration;

// Create client with KoboldCpp provider
var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:5000",
    ProviderOptions = new KoboldCppNativeOptions
    {
        ContextSize = 2048,
        UseGpu = true,
        RepetitionPenalty = 1.1f
    }
};

using var client = new LocalAIClient(options);

// Generate text completion
string response = await client.CompleteAsync("Write a short story about a robot:");

// Stream completion tokens
await foreach (var token in client.StreamCompletionAsync("Once upon a time..."))
{
    Console.Write(token);
}

// List available models
var models = await client.GetAvailableModelsAsync();
foreach (var model in models)
{
    Console.WriteLine($"Model: {model.Name} (Provider: {model.Provider})");
    Console.WriteLine($"Context Length: {model.Capabilities.MaxContextLength}");
}
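
Since every call is async, a completion can also be bounded with a timeout. This is a minimal sketch assuming the client's methods accept an optional CancellationToken; the examples above don't show that parameter, so treat the name and overload as an assumption:

using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

try
{
    // Hypothetical overload: assumes CompleteAsync exposes an optional
    // CancellationToken parameter named cancellationToken.
    string story = await client.CompleteAsync(
        "Write a short story about a robot:",
        cancellationToken: cts.Token);
    Console.WriteLine(story);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Completion cancelled after 30 seconds.");
}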

Provider Configuration

KoboldCpp (Native)

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:5000",
    ProviderOptions = new KoboldCppNativeOptions
    {
        ContextSize = 2048,
        UseGpu = true,
        RepetitionPenalty = 1.1f,
        RepetitionPenaltyRange = 320,
        TrimStop = true,
        Mirostat = new MirostatSettings
        {
            Mode = 2,
            Tau = 5.0f,
            Eta = 0.1f
        }
    }
};

KoboldCpp (OpenAI-compatible)

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:5000",
    ProviderOptions = new KoboldCppOpenAiOptions
    {
        ContextSize = 2048,
        UseGpu = true,
        ModelName = "koboldcpp",
        UseChatCompletions = true
    }
};

Ollama

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:11434",
    ProviderOptions = new OllamaOptions
    {
        ConcurrentRequests = 1
    }
};

LM Studio

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:1234",
    ProviderOptions = new LMStudioOptions
    {
        UseOpenAIEndpoint = true
    }
};

Text Generation WebUI

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:7860",
    ProviderOptions = new TextGenWebOptions
    {
        UseOpenAIEndpoint = true
    }
};

Completion Options

var options = new CompletionOptions
{
    ModelName = "wizardLM",         // Optional model name
    MaxTokens = 200,                // Max tokens to generate
    Temperature = 0.7f,             // Randomness (0.0-1.0)
    TopP = 0.9f,                    // Nucleus sampling threshold
    StopSequences = new[] { "\n" }  // Sequences that stop generation
};

string response = await client.CompleteAsync("Your prompt here", options);
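
The same options object would presumably apply when streaming. The sketch below assumes StreamCompletionAsync accepts an optional CompletionOptions parameter like CompleteAsync does; the Quick Start only shows the prompt-only form, so the overload is an assumption:

// Assumption: StreamCompletionAsync takes the same optional options
// parameter as CompleteAsync.
await foreach (var token in client.StreamCompletionAsync("Your prompt here", options))
{
    Console.Write(token);
}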

Progress Tracking

client.OnProgress += (progress) =>
{
    switch (progress.State)
    {
        case LocalAIProgressState.Starting:
            Console.WriteLine("Starting completion...");
            break;
        case LocalAIProgressState.Processing:
            Console.WriteLine($"Processing: {progress.Message}");
            break;
        case LocalAIProgressState.Streaming:
            Console.WriteLine("Receiving tokens...");
            break;
        case LocalAIProgressState.Complete:
            Console.WriteLine("Completion finished!");
            break;
        case LocalAIProgressState.Failed:
            Console.WriteLine($"Error: {progress.Message}");
            break;
    }
};

Error Handling

try
{
    var response = await client.CompleteAsync("Test prompt");
}
catch (LocalAIException ex)
{
    Console.WriteLine($"LocalAI API error: {ex.Message}");
    if (ex.StatusCode.HasValue)
    {
        Console.WriteLine($"Status code: {ex.StatusCode}");
    }
    if (ex.Provider != null)
    {
        Console.WriteLine($"Provider: {ex.Provider}");
    }
}
catch (Exception ex)
{
    Console.WriteLine($"General error: {ex.Message}");
}

Advanced Configuration

var options = new LocalAIOptions
{
    BaseUrl = "http://localhost:5000",
    ApiKey = "optional_api_key",
    Timeout = TimeSpan.FromMinutes(2),
    MaxRetryAttempts = 3,
    RetryDelay = TimeSpan.FromSeconds(2),
    Logger = loggerInstance,
    JsonSettings = new JsonSerializerSettings(),
    ProviderOptions = new KoboldCppNativeOptions()
};
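
The exact type of the Logger property isn't shown here. Assuming it accepts a Microsoft.Extensions.Logging.ILogger, the loggerInstance above could be wired up like this (requires the Microsoft.Extensions.Logging.Console package):

using Microsoft.Extensions.Logging;

// Sketch: create an ILogger for the Logger property, assuming that
// property accepts a Microsoft.Extensions.Logging.ILogger.
using var loggerFactory = LoggerFactory.Create(builder =>
    builder.AddConsole().SetMinimumLevel(LogLevel.Debug));
ILogger loggerInstance = loggerFactory.CreateLogger("LocalAI.NET");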

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Please see CONTRIBUTING.md for details on:

  • How to publish to NuGet
  • Development guidelines
  • Code style
  • Testing requirements
  • Pull request process

Support

For issues and feature requests, please use the GitHub issues page.

Compatible and additional computed target framework versions

.NET: net6.0, net7.0, and net8.0 are compatible; net9.0 was computed. The platform-specific variants (android, ios, maccatalyst, macos, tvos, and windows for net6.0 through net9.0, plus browser for net8.0 and net9.0) were also computed.

Learn more about Target Frameworks and .NET Standard.

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version   Downloads   Last updated
1.0.1     60          12/29/2024
1.0.0     54          12/29/2024