Forge.OpenAI 1.0.1


Forge.OpenAI

OpenAI API client library for .NET. This is not an official library; I developed it for myself and for the public, and it is free to use. Supported .NET versions:

.NET Framework 4.6.1 or newer,

.NET Standard 2.0 or newer,

.NET Core 3.1 or newer,

.NET 6.0,

.NET 7.0

Installing

To install the package, add the following line to your csproj file, replacing x.x.x with the latest version number:

<PackageReference Include="Forge.OpenAI" Version="x.x.x" />

You can also install via the .NET CLI with the following command:

dotnet add package Forge.OpenAI

If you're using Visual Studio you can also install via the built in NuGet package manager.

Setup

You need to create an API key to work with the OpenAI API.

If you do not have an account at OpenAI, create one here: https://platform.openai.com/

Then navigate to: https://platform.openai.com/account/api-keys

By default, this library uses Microsoft Dependency Injection, but using it is not mandatory.

You can register the client services with the service collection in your Startup.cs / Program.cs file in your application.

public void ConfigureServices(IServiceCollection services)
{
    services.AddForgeOpenAI(options => {
        options.AuthenticationInfo = Configuration["OpenAI:ApiKey"]!;
    });
}

Or in your Program.cs file.

public static async Task Main(string[] args)
{
    var builder = WebAssemblyHostBuilder.CreateDefault(args);
    builder.RootComponents.Add<App>("app");

    builder.Services.AddForgeOpenAI(options => {
        options.AuthenticationInfo = builder.Configuration["OpenAI:ApiKey"]!;
    });

    await builder.Build().RunAsync();
}

Or

public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options => {
                options.AuthenticationInfo = builder.Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    // resolve IOpenAIService from host.Services and use it here
}

You should provide your OpenAI API key and optionally your organization ID to boot up the service. If you do not provide them in the configuration, the service automatically looks up the necessary information in your environment variables, in a JSON file (.openai), or in an environment file (.env).

Example for environment variables:

OPENAI_KEY, OPENAI_API_KEY, OPENAI_SECRET_KEY or TEST_OPENAI_SECRET_KEY is checked for the API key

ORGANIZATION is checked for the organization

Example for Json file:

{ "apikey": "your_api_key", "organization": "organization_id" }

The environment file must contain key/value pairs in the format {key}={value}.

For the key, use one of the environment variable names described above.

Example for environment file:

OPENAI_KEY=your_api_key

ORGANIZATION=optionally_your_organization

Options

OpenAI and the dependent services require OpenAIOptions, which can be provided manually, or is supplied automatically if you use dependency injection. If you need to use multiple OpenAI service instances at the same time, provide these options individually, with different settings and authentication credentials.

The options include several Uri settings that normally do not need to be changed. The most important option is the AuthenticationInfo property, which contains the API key and the organization ID.

Also, there is an additional option called HttpMessageHandlerFactory, which constructs the HttpMessageHandler for the HttpClient in special cases, for example if you want to override some behavior of the HttpClient.

There is a built-in logging feature, intended for testing and debugging purposes only, called LogRequestsAndResponses, which persists all requests and responses in a folder (LogRequestsAndResponsesFolder). With this feature you can inspect the low-level messages. I do not recommend using it in a production environment.
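Putting the options above together, a registration might look like the following sketch. The property names (AuthenticationInfo, LogRequestsAndResponses, LogRequestsAndResponsesFolder, HttpMessageHandlerFactory) come from the description above; the folder name is just an example, and the factory delegate's exact signature should be checked against the library source before use.

services.AddForgeOpenAI(options =>
{
    // required: the API key (and optionally the organization ID)
    options.AuthenticationInfo = Configuration["OpenAI:ApiKey"]!;

    // optional: low-level request/response logging, for debugging only
    options.LogRequestsAndResponses = true;
    options.LogRequestsAndResponsesFolder = "openai-logs"; // example folder name

    // optional: customize the HttpMessageHandler used by the HttpClient
    // (delegate signature not shown here; see the library source)
    // options.HttpMessageHandlerFactory = ...;
});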

Examples

If you would like to learn more about the API capabilities, please visit https://platform.openai.com/docs/api-reference. If you need to generate an API key, please visit: https://platform.openai.com/account/api-keys

I have created a playground, which is part of this solution. It covers all of the features this library provides. Feel free to run through these examples and play with the settings.

There is also the OpenAI playground, where you can find more usage examples: https://platform.openai.com/playground/p/default-chat?lang=node.js&mode=complete&model=text-davinci-003

Example - Text completion 1.

The next code demonstrates how to give a simple instruction (prompt). The whole answer is generated remotely on the OpenAI side, then sent back in a single response.

public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options => {
                options.AuthenticationInfo = builder
                    .Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    IOpenAIService openAi = host.Services.GetService<IOpenAIService>()!;

    // in this scenario the answer is generated on the server side,
    // then the whole text is sent back in one pass.
    // this method is useful for small conversations and short answers

    TextCompletionRequest request = new TextCompletionRequest();
    request.Prompt = "Say this is a test";

    HttpOperationResult<TextCompletionResponse> response = 
        await openAi.TextCompletionService
            .GetAsync(request, CancellationToken.None)
                .ConfigureAwait(false);

    if (response.IsSuccess)
    {
        response.Result!.Completions.ForEach(c => Console.WriteLine(c.Text));

        request.Prompt = "Are you sure?";

        response = await openAi.TextCompletionService
            .GetAsync(request, CancellationToken.None).ConfigureAwait(false);

        if (response.IsSuccess)
        {
            response.Result!.Completions.ForEach(c => Console.WriteLine(c.Text));
        }
        else
        {
            Console.WriteLine(response);
        }
    }
    else
    {
        Console.WriteLine(response);
    }

}

Example - Text completion 2.

The next example demonstrates how you can receive an answer in streamed mode. Streamed mode means you get the generated answer in pieces, not in one package as in the previous example. Because generating an answer takes time, it can be useful to see the result as it arrives. The process can also be cancelled.

This version works with a callback, which is invoked each time a piece of the answer arrives.

public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options => {
                options.AuthenticationInfo = builder
                    .Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    IOpenAIService openAi = host.Services.GetService<IOpenAIService>()!;

    // this method is useful for older .NET versions where IAsyncEnumerable is not supported,
    // or if you simply do not prefer that approach

    TextCompletionRequest request = new TextCompletionRequest();
    request.Prompt = "Write a C# code which demonstrate how to open a text file and read its content";
    request.MaxTokens = 4096 - request.Prompt
        .Split(" ", StringSplitOptions.RemoveEmptyEntries).Length; // rough max-token estimate based on word count
    request.Temperature = 0.1; // lower value means a more deterministic answer

    Console.WriteLine(request.Prompt);

    Action<HttpOperationResult<TextCompletionResponse>> receivedDataHandler = 
        (HttpOperationResult<TextCompletionResponse> response) => 
    {
        if (response.IsSuccess)
        {
            Console.Write(response.Result?.Completions[0].Text);
        }
        else
        {
            Console.WriteLine(response);
        }
    };

    HttpOperationResult response = await openAi.TextCompletionService
        .GetStreamAsync(request, receivedDataHandler, CancellationToken.None)
            .ConfigureAwait(false);

    if (response.IsSuccess)
    {
        Console.WriteLine();
    }
    else
    {
        Console.WriteLine(response);
    }

}

Example - Text completion 3.

The last example in this topic also demonstrates how to receive an answer in streamed mode.

This version works with IAsyncEnumerable, which is not supported in older .NET versions.

public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options => {
                options.AuthenticationInfo = builder
                    .Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    IOpenAIService openAi = host.Services.GetService<IOpenAIService>()!;

    TextCompletionRequest request = new TextCompletionRequest();
    request.Prompt = "Write a C# code which demonstrate how to write some text into file";
    request.MaxTokens = 4096 - request.Prompt
        .Split(" ", StringSplitOptions.RemoveEmptyEntries).Length; // rough max-token estimate based on word count
    request.Temperature = 0.1; // lower value means a more deterministic answer

    Console.WriteLine(request.Prompt);

    await foreach (HttpOperationResult<TextCompletionResponse> response in 
        openAi.TextCompletionService.GetStreamAsync(request, CancellationToken.None))
    {
        if (response.IsSuccess)
        {
            Console.Write(response.Result?.Completions[0].Text);
        }
        else
        {
            Console.WriteLine(response);
        }
    }

}

Example - Text edit

Editing text means, for example, asking the model to fix an incorrect sentence.

public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options => {
                options.AuthenticationInfo = builder
                    .Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    IOpenAIService openAi = host.Services.GetService<IOpenAIService>()!;

    TextEditRequest request = new TextEditRequest();
    request.InputTextForEditing = "Do you happy with your order?";
    request.Instruction = "Fix the grammar";

    Console.WriteLine(request.InputTextForEditing);
    Console.WriteLine(request.Instruction);

    HttpOperationResult<TextEditResponse> response = 
        await openAi.TextEditService.GetAsync(request, CancellationToken.None)
            .ConfigureAwait(false);

    if (response.IsSuccess)
    {
        // output: Are you happy with your order?
        response.Result!.Choices.ForEach(c => Console.WriteLine(c.Text));
    }
    else
    {
        Console.WriteLine(response);
    }

}
Compatible and additional computed target framework versions:

.NET: net6.0 and net7.0 are compatible; net5.0 and net8.0, along with the platform-specific variants (windows, android, ios, maccatalyst, macos, tvos, browser), were computed.
.NET Core: netcoreapp3.1 is compatible; netcoreapp2.0, 2.1, 2.2 and 3.0 were computed.
.NET Standard: netstandard2.0 is compatible; netstandard2.1 was computed.
.NET Framework: net461 is compatible; net462 through net481 were computed.
Mono / Xamarin / Tizen: monoandroid, monomac, monotouch, tizen40, tizen60, xamarinios, xamarinmac, xamarintvos and xamarinwatchos were computed.

