SignalR.Kafka 1.0.2

.NET CLI:

    dotnet add package SignalR.Kafka --version 1.0.2

Package Manager (for use within the Visual Studio Package Manager Console, as it uses the NuGet module's version of Install-Package):

    NuGet\Install-Package SignalR.Kafka -Version 1.0.2

PackageReference (for projects that support PackageReference, copy this XML node into the project file):

    <PackageReference Include="SignalR.Kafka" Version="1.0.2" />

Paket CLI:

    paket add SignalR.Kafka --version 1.0.2

Script & Interactive (the #r directive can be used in F# Interactive and Polyglot Notebooks):

    #r "nuget: SignalR.Kafka, 1.0.2"

Cake:

    // Install SignalR.Kafka as a Cake Addin
    #addin nuget:?package=SignalR.Kafka&version=1.0.2

    // Install SignalR.Kafka as a Cake Tool
    #tool nuget:?package=SignalR.Kafka&version=1.0.2

SignalR.Kafka

An Apache Kafka backplane for ASP.NET Core SignalR

This project is largely based on a fork of the SignalR Core Redis provider. It uses Kafka as the backplane to send SignalR messages between multiple servers, allowing horizontal scaling of the SignalR deployment. The Confluent Kafka .NET client is used for producing and consuming messages.

Kafka Configuration

Kafka topics are created automatically during startup if they don't yet exist. If not specified, the default topic configuration uses 10 partitions and a replication factor of 1.

Topics may also be created manually in Kafka before the application runs. The following schema is used; a partitioning strategy may be designed based on the key information provided for each topic.

  • (prefix-)ack: acknowledgment messages for group management; key is the server's unique name
  • (prefix-)group-mgmt: group management messages (adding/removing a connection from a group); key is the server's unique name
  • (prefix-)send-all: messages intended for all connected clients; no key is used, so messages are delivered round-robin across partitions
  • (prefix-)send-conn: messages intended for a specific client connection; key is the connection id
  • (prefix-)send-group: messages intended for a specific group; key is the group name
  • (prefix-)send-user: messages intended for a specific user; key is the user id

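If you prefer to pre-create the topics, the schema above can be provisioned with the stock Kafka CLI. This is a sketch, not part of the library: the broker address, the topic prefix "my-prefix-", and the partition/replication values (mirroring the library's 10-partition, replication-factor-1 defaults) are all assumptions to adapt to your deployment.

```shell
# Sketch: pre-create the backplane topics with the standard Kafka CLI.
# Assumes a broker at localhost:9092 and the topic prefix "my-prefix-".
BROKER=localhost:9092
PREFIX=my-prefix-
for TOPIC in ack group-mgmt send-all send-conn send-group send-user; do
  kafka-topics.sh --bootstrap-server "$BROKER" --create \
    --topic "${PREFIX}${TOPIC}" --partitions 10 --replication-factor 1
done
```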
Usage

  1. Install the Ascentis.SignalR.Kafka NuGet package.
  2. In ConfigureServices in Startup.cs, configure SignalR with .AddKafka():
// Requires: using Confluent.Kafka;
services.AddSignalR()
    .AddKafka((options) =>
    {
        // bootstrapServers is the Kafka broker list, e.g. "localhost:9092"
        options.ConsumerConfig = new ConsumerConfig
        {
            // A unique consumer group per server instance, so every server
            // receives all backplane messages
            GroupId = $"{Environment.MachineName}_{Guid.NewGuid():N}",
            BootstrapServers = bootstrapServers,
            AutoOffsetReset = AutoOffsetReset.Latest,
            EnableAutoCommit = true
        };
        options.ProducerConfig = new ProducerConfig
        {
            BootstrapServers = bootstrapServers,
            ClientId = $"{Environment.MachineName}_{Guid.NewGuid():N}"
        };
    });

The producer and consumer configurations must be specified via options.ProducerConfig and options.ConsumerConfig. A topic prefix may be configured through the KafkaTopicConfig object to allow multiple instances of the schema on a single Kafka deployment:

.AddKafka((options) =>
{
    options.KafkaTopicConfig = new KafkaTopicConfig(topicPrefix: "my-prefix");
});

The KafkaTopicConfig may also be used for specifying initial topic creation specifications for each topic in the schema:

.AddKafka((options) =>
{
    options.KafkaTopicConfig = new KafkaTopicConfig(
        ackSpecification: new KafkaTopicSpecification
        {
            ReplicationFactor = 1,
            NumPartitions = 10
        },
        groupManagementSpecification: new KafkaTopicSpecification
        {
            ReplicationFactor = 1,
            NumPartitions = 10
        });
});

Performance Considerations

The Confluent Kafka client producer accumulates messages internally and sends them in batches. This supports high overall throughput, but sacrifices latency for individual message delivery. The latency and buffer sizes are configurable using Kafka ProducerConfig. See the Confluent documentation for more information.
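As a sketch of that trade-off, batching behavior can be tuned through the standard Confluent.Kafka ProducerConfig properties; the values below are illustrative assumptions, not recommendations:

```csharp
options.ProducerConfig = new ProducerConfig
{
    BootstrapServers = bootstrapServers,
    // How long the producer waits to fill a batch before sending (ms).
    // Lower values reduce per-message latency; higher values favor throughput.
    LingerMs = 5,
    // Maximum batch size in bytes before the producer sends regardless of linger.
    BatchSize = 32 * 1024
};
```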

By default, produce is called asynchronously and the SignalR action (Send) returns prior to message delivery to the Kafka server. This allows for the highest throughput, but comes with some risk that message delivery to the server may fail silently from the client's perspective (exceptions are still logged). The configuration options allow changing this behavior to synchronously await completion of the produce call:

.AddKafka((options) =>
{
    options.AwaitProduce = true;
});

Any exceptions during the produce operation should bubble up to the client when AwaitProduce is true. However, this configuration dramatically reduces throughput. See the Confluent documentation for more information on typical producer usage patterns.
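With AwaitProduce enabled, calling code can observe delivery failures directly. A minimal sketch, assuming a hypothetical ChatHub and "ReceiveMessage" client method (neither is part of this library):

```csharp
// Hypothetical broadcast helper; with AwaitProduce = true, a failed
// produce should surface here as an exception rather than only a log entry.
public async Task NotifyAllAsync(IHubContext<ChatHub> hubContext, string message)
{
    try
    {
        await hubContext.Clients.All.SendAsync("ReceiveMessage", message);
    }
    catch (Exception ex)
    {
        // Handle or rethrow; the message may not have reached Kafka.
        logger.LogError(ex, "Broadcast failed");
        throw;
    }
}
```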

Target frameworks

.NET: net6.0 is compatible (included in the package). net7.0, net8.0, and the platform-specific variants (android, ios, maccatalyst, macos, tvos, windows, browser) are computed as compatible.

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last updated
1.0.7 3,098 7/18/2022
1.0.6 631 6/24/2022
1.0.5 616 1/26/2022
1.0.4 587 1/19/2022
1.0.3 636 1/19/2022
1.0.2 607 1/19/2022