AssemblyAI 1.1.3
See the version list below for details.

Install options:
- .NET CLI: `dotnet add package AssemblyAI --version 1.1.3`
- Package Manager: `NuGet\Install-Package AssemblyAI -Version 1.1.3`
- PackageReference: `<PackageReference Include="AssemblyAI" Version="1.1.3" />`
- Paket CLI: `paket add AssemblyAI --version 1.1.3`
- F# Interactive / Polyglot Notebooks: `#r "nuget: AssemblyAI, 1.1.3"`
- Cake Addin: `#addin nuget:?package=AssemblyAI&version=1.1.3`
- Cake Tool: `#tool nuget:?package=AssemblyAI&version=1.1.3`
<img src="https://github.com/AssemblyAI/assemblyai-python-sdk/blob/master/assemblyai.png?raw=true" width="500" alt="AssemblyAI logo"/>
AssemblyAI C# .NET SDK
The AssemblyAI C# SDK provides an easy-to-use interface for interacting with the AssemblyAI API from .NET, which supports async and real-time transcription, as well as the latest audio intelligence and LeMUR models.
The C# SDK is compatible with .NET 6.0 and up, .NET Framework 4.6.2 and up, and .NET Standard 2.0.
Documentation
Visit the AssemblyAI documentation for step-by-step instructions and a lot more details about our AI models and API. Explore the SDK API reference for more details on the SDK types, functions, and classes.
Quickstart
You can find the AssemblyAI C# SDK on NuGet.
Add the latest version using the .NET CLI:
dotnet add package AssemblyAI
Then, create an AssemblyAIClient with your API key:
using AssemblyAI;
var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);
You can now use the `client` object to interact with the AssemblyAI API.
Add the AssemblyAIClient to the dependency injection container
The AssemblyAI SDK has built-in support for the default .NET dependency injection container.
Add the `AssemblyAIClient` to the service collection like this:
using AssemblyAI;
// build your services
services.AddAssemblyAIClient();
By default, the `AssemblyAIClient` loads its configuration from the `AssemblyAI` section of the .NET configuration.
{
"AssemblyAI": {
"ApiKey": "YOUR_ASSEMBLYAI_API_KEY"
}
}
You can also configure the `AssemblyAIClient` in other ways using the `AddAssemblyAIClient` overloads.
using AssemblyAI;
// build your services
services.AddAssemblyAIClient(options =>
{
options.ApiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!;
});
Speech-To-Text
Transcribe audio and video files
<details open> <summary>Transcribe an audio file with a public URL</summary>
When you create a transcript, you can either pass in a URL to an audio file or upload a file directly.
using AssemblyAI;
using AssemblyAI.Transcripts;
var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);
// Transcribe file at remote URL
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
AudioUrl = "https://storage.googleapis.com/aai-web-samples/espn-bears.m4a",
LanguageCode = TranscriptLanguageCode.EnUs
});
// checks if transcript.Status == TranscriptStatus.Completed, throws an exception if not
transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
`TranscribeAsync` queues a transcription job and polls it until the `transcript.Status` is `completed` or `error`.
If you don't want to wait until the transcript is ready, you can use `SubmitAsync`:
transcript = await client.Transcripts.SubmitAsync(new TranscriptParams
{
AudioUrl = "https://storage.googleapis.com/aai-web-samples/espn-bears.m4a",
LanguageCode = TranscriptLanguageCode.EnUs
});
</details>
<details> <summary>Transcribe a local audio file</summary>
When you create a transcript, you can either pass in a URL to an audio file or upload a file directly.
using AssemblyAI;
using AssemblyAI.Transcripts;
var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);
// Transcribe file using file info
var transcript = await client.Transcripts.TranscribeAsync(
new FileInfo("./news.mp4"),
new TranscriptOptionalParams
{
LanguageCode = TranscriptLanguageCode.EnUs
}
);
// Transcribe file from stream
await using var stream = new FileStream("./news.mp4", FileMode.Open);
transcript = await client.Transcripts.TranscribeAsync(
stream,
new TranscriptOptionalParams
{
LanguageCode = TranscriptLanguageCode.EnUs
}
);
`TranscribeAsync` queues a transcription job and polls it until the `transcript.Status` is `completed` or `error`.
If you don't want to wait until the transcript is ready, you can use `SubmitAsync`:
transcript = await client.Transcripts.SubmitAsync(
new FileInfo("./news.mp4"),
new TranscriptOptionalParams
{
LanguageCode = TranscriptLanguageCode.EnUs
}
);
</details>
<details> <summary>Enable additional AI models</summary>
You can extract even more insights from the audio by enabling any of our AI models using transcription options. For example, here's how to enable the Speaker Diarization model to detect who said what.
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
AudioUrl = "https://storage.googleapis.com/aai-web-samples/espn-bears.m4a",
SpeakerLabels = true
});
// checks if transcript.Status == TranscriptStatus.Completed, throws an exception if not
transcript.EnsureStatusCompleted();
foreach (var utterance in transcript.Utterances)
{
Console.WriteLine($"Speaker {utterance.Speaker}: {utterance.Text}");
}
</details>
<details> <summary>Get a transcript</summary>
This will return the transcript object in its current state. If the transcript is still processing, the `Status` field will be `Queued` or `Processing`. Once the transcript is complete, the `Status` field will be `Completed`.
var transcript = await client.Transcripts.GetAsync(transcript.Id);
If you created a transcript using `.SubmitAsync(...)`, you can still poll until the transcript `Status` is `Completed` or `Error` using `.WaitUntilReady(...)`:
transcript = await client.Transcripts.WaitUntilReady(
transcript.Id,
pollingInterval: TimeSpan.FromSeconds(1),
pollingTimeout: TimeSpan.FromMinutes(10)
);
</details> <details> <summary>Get sentences and paragraphs</summary>
var sentences = await client.Transcripts.GetSentencesAsync(transcript.Id);
var paragraphs = await client.Transcripts.GetParagraphsAsync(transcript.Id);
</details>
<details> <summary>Get subtitles</summary>
const int charsPerCaption = 32;
var srt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Srt);
srt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Srt, charsPerCaption: charsPerCaption);
var vtt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Vtt);
vtt = await client.Transcripts.GetSubtitlesAsync(transcript.Id, SubtitleFormat.Vtt, charsPerCaption: charsPerCaption);
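Since the captions come back as plain strings, persisting them is straightforward. A minimal sketch, where the transcript ID placeholder and the output path are illustrative assumptions:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);

// Fetch SRT captions for an existing transcript and save them to disk.
var transcriptId = "YOUR_TRANSCRIPT_ID"; // placeholder
var srt = await client.Transcripts.GetSubtitlesAsync(transcriptId, SubtitleFormat.Srt);
await File.WriteAllTextAsync("captions.srt", srt); // illustrative path
```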
</details> <details> <summary>List transcripts</summary>
This will return a page of transcripts you created.
var page = await client.Transcripts.ListAsync();
You can also paginate over all pages.
var page = await client.Transcripts.ListAsync();
while(page.PageDetails.PrevUrl != null)
{
page = await client.Transcripts.ListAsync(page.PageDetails.PrevUrl);
}
[!NOTE] To paginate over all pages, you need to use `page.PageDetails.PrevUrl`, because the transcripts are returned in descending order by creation date and time. The first page contains the most recent transcripts, and each "previous" page contains older transcripts.
</details>
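Because each "previous" page holds older transcripts, you can walk backwards through the pages to collect every transcript into one list. This is a sketch assuming the same client setup as above; the `Transcripts` property and the `TranscriptListItem` element type are assumptions based on typical SDK naming, not confirmed API names:

```csharp
// Accumulate all transcripts by following PrevUrl until no older pages remain.
var allTranscripts = new List<TranscriptListItem>();
var page = await client.Transcripts.ListAsync();
allTranscripts.AddRange(page.Transcripts);
while (page.PageDetails.PrevUrl != null)
{
    page = await client.Transcripts.ListAsync(page.PageDetails.PrevUrl);
    allTranscripts.AddRange(page.Transcripts);
}
```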
<details> <summary>Delete a transcript</summary>
var transcript = await client.Transcripts.DeleteAsync(transcript.Id);
</details>
Transcribe in real-time
Create the real-time transcriber.
using AssemblyAI;
using AssemblyAI.Realtime;
var client = new AssemblyAIClient(Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")!);
await using var transcriber = client.Realtime.CreateTranscriber();
You can also pass in the following options.
using AssemblyAI;
using AssemblyAI.Realtime;
await using var transcriber = client.Realtime.CreateTranscriber(new RealtimeTranscriberOptions
{
// If ApiKey is null, the API key passed to `AssemblyAIClient` will be used
ApiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY"),
RealtimeUrl = "wss://localhost/override",
SampleRate = 16_000,
WordBoost = new[] { "foo", "bar" }
});
[!WARNING] Storing your API key in client-facing applications exposes your API key. Generate a temporary auth token on the server and pass it to your client.

Server code:

var token = await client.Realtime.CreateTemporaryTokenAsync(expiresIn: 60);
// TODO: return token to client

Client code:

using AssemblyAI;
using AssemblyAI.Realtime;

var token = await GetToken();
await using var transcriber = new RealtimeTranscriber
{
    Token = token.Token
};
You can configure the following events.
transcriber.SessionBegins.Subscribe(
message => Console.WriteLine(
$"""
Session begins:
- Session ID: {message.SessionId}
- Expires at: {message.ExpiresAt}
""")
);
transcriber.PartialTranscriptReceived.Subscribe(
transcript => Console.WriteLine("Partial transcript: {0}", transcript.Text)
);
transcriber.FinalTranscriptReceived.Subscribe(
transcript => Console.WriteLine("Final transcript: {0}", transcript.Text)
);
transcriber.TranscriptReceived.Subscribe(
transcript => Console.WriteLine("Transcript: {0}", transcript.Match(
partialTranscript => partialTranscript.Text,
finalTranscript => finalTranscript.Text
))
);
transcriber.ErrorReceived.Subscribe(
error => Console.WriteLine("Error: {0}", error)
);
transcriber.Closed.Subscribe(
closeEvt => Console.WriteLine("Closed: {0} - {1}", closeEvt.Code, closeEvt.Reason)
);
After configuring your events, connect to the server.
await transcriber.ConnectAsync();
Send audio data via chunks.
// Pseudo code for getting audio
GetAudio(audioChunk => {
transcriber.SendAudio(audioChunk);
});
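The pseudocode above can be made concrete by streaming pre-recorded PCM audio from a file in fixed-size chunks. This is only a sketch: the file path, chunk size, pacing delay, and the assumption that `SendAudio` accepts a byte array (as in the pseudocode) are all illustrative; real microphone capture would use an audio library.

```csharp
// Stream raw 16 kHz, 16-bit PCM audio to the transcriber in small chunks.
// Assumes `transcriber` is the connected transcriber created above.
await using var audio = File.OpenRead("audio.pcm"); // illustrative path
var buffer = new byte[8192];
int bytesRead;
while ((bytesRead = await audio.ReadAsync(buffer)) > 0)
{
    var chunk = new byte[bytesRead];
    Array.Copy(buffer, chunk, bytesRead);
    transcriber.SendAudio(chunk);
    // Optional: pace the sends to roughly mirror real-time capture.
    await Task.Delay(TimeSpan.FromMilliseconds(100));
}
```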
Close the connection when you're finished.
await transcriber.CloseAsync();
Apply LLMs to your audio with LeMUR
Call LeMUR endpoints to apply LLMs to your transcript.
<details open> <summary>Prompt your audio with LeMUR</summary>
var response = await client.Lemur.TaskAsync(new LemurTaskParams
{
TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
Prompt = "Write a haiku about this conversation.",
});
</details>
<details> <summary>Summarize with LeMUR</summary>
var response = await client.Lemur.SummaryAsync(new LemurSummaryParams
{
TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
AnswerFormat = "one sentence",
Context = new Dictionary<string, object?>
{
["Speaker"] = new[] { "Alex", "Bob" }
}
});
</details>
<details> <summary>Ask questions</summary>
var response = await client.Lemur.QuestionAnswerAsync(new LemurQuestionAnswerParams
{
TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"],
Questions = [
new LemurQuestion
{
Question = "What are they discussing?",
AnswerFormat = "text"
}
]
});
</details> <details> <summary>Generate action items</summary>
var response = await client.Lemur.ActionItemsAsync(new LemurActionItemsParams
{
TranscriptIds = ["0d295578-8c75-421a-885a-2c487f188927"]
});
</details> <details> <summary>Delete LeMUR request</summary>
var response = await client.Lemur.PurgeRequestDataAsync(lemurResponse.RequestId);
</details>
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 is compatible. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 is compatible. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
.NETFramework 4.6.2
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.1)
- Microsoft.IO.RecyclableMemoryStream (>= 3.0.1)
- OneOf (>= 3.0.271)
- OneOf.Extended (>= 3.0.271)
- Portable.System.DateTimeOnly (>= 8.0.1)
- Riok.Mapperly (>= 3.6.0)
- System.Text.Json (>= 8.0.4)
.NETStandard 2.0
- Microsoft.Extensions.DependencyInjection (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.1)
- Microsoft.Extensions.Options (>= 8.0.2)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 8.0.0)
- Microsoft.IO.RecyclableMemoryStream (>= 3.0.1)
- OneOf (>= 3.0.271)
- OneOf.Extended (>= 3.0.271)
- Portable.System.DateTimeOnly (>= 8.0.1)
- Riok.Mapperly (>= 3.6.0)
- System.Text.Json (>= 8.0.4)
net6.0
- Microsoft.Extensions.DependencyInjection (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.1)
- Microsoft.Extensions.Options (>= 8.0.2)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 8.0.0)
- Microsoft.IO.RecyclableMemoryStream (>= 3.0.1)
- OneOf (>= 3.0.271)
- OneOf.Extended (>= 3.0.271)
- Riok.Mapperly (>= 3.6.0)
- System.Text.Json (>= 8.0.4)
net7.0
- Microsoft.Extensions.DependencyInjection (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.1)
- Microsoft.Extensions.Options (>= 8.0.2)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 8.0.0)
- Microsoft.IO.RecyclableMemoryStream (>= 3.0.1)
- OneOf (>= 3.0.271)
- OneOf.Extended (>= 3.0.271)
- Riok.Mapperly (>= 3.6.0)
- System.Text.Json (>= 8.0.4)
net8.0
- Microsoft.Extensions.DependencyInjection (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.1)
- Microsoft.Extensions.Options (>= 8.0.2)
- Microsoft.Extensions.Options.ConfigurationExtensions (>= 8.0.0)
- Microsoft.IO.RecyclableMemoryStream (>= 3.0.1)
- OneOf (>= 3.0.271)
- OneOf.Extended (>= 3.0.271)
- Riok.Mapperly (>= 3.6.0)
- System.Text.Json (>= 8.0.4)
NuGet packages (1)
Showing the NuGet package that depends on AssemblyAI:
Package | Downloads |
---|---|
AssemblyAI.SemanticKernel
Transcribe audio using AssemblyAI with Semantic Kernel plugins |
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
1.2.0 | 942 | 11/6/2024 |
1.1.4 | 2,266 | 10/8/2024 |
1.1.3 | 474 | 9/13/2024 |
1.1.2 | 116 | 9/13/2024 |
1.1.1 | 199 | 9/4/2024 |
1.1.0 | 159 | 8/26/2024 |
1.0.1 | 343 | 8/15/2024 |
1.0.0 | 589 | 8/14/2024 |
1.0.0-beta3 | 106 | 8/12/2024 |
1.0.0-beta2 | 102 | 8/9/2024 |
1.0.0-beta1 | 76 | 8/1/2024 |
1.0.0-beta | 72 | 8/1/2024 |
0.0.2-alpha | 96 | 7/16/2024 |
0.0.1-alpha | 105 | 6/21/2024 |