Nabs.Launchpad.Core.Ai
10.0.219
.NET CLI:
dotnet add package Nabs.Launchpad.Core.Ai --version 10.0.219

Package Manager:
NuGet\Install-Package Nabs.Launchpad.Core.Ai -Version 10.0.219

PackageReference:
<PackageReference Include="Nabs.Launchpad.Core.Ai" Version="10.0.219" />

Central Package Management (Directory.Packages.props):
<PackageVersion Include="Nabs.Launchpad.Core.Ai" Version="10.0.219" />
<PackageReference Include="Nabs.Launchpad.Core.Ai" />

Paket CLI:
paket add Nabs.Launchpad.Core.Ai --version 10.0.219

Script & Interactive:
#r "nuget: Nabs.Launchpad.Core.Ai, 10.0.219"

File-based apps:
#:package Nabs.Launchpad.Core.Ai@10.0.219

Cake:
#addin nuget:?package=Nabs.Launchpad.Core.Ai&version=10.0.219
#tool nuget:?package=Nabs.Launchpad.Core.Ai&version=10.0.219
Nabs Launchpad Core AI Library
The Nabs Launchpad Core AI library provides a simplified wrapper around Microsoft Semantic Kernel for managing AI chat sessions with OpenAI-compatible services. This library makes it easy to build conversational AI applications with support for system prompts, user messages, and streaming responses.
Key Features
- Simplified Chat Sessions: Easy-to-use service for managing AI chat conversations
- Semantic Kernel Integration: Built on Microsoft's Semantic Kernel framework
- Streaming Response Support: Handles streaming chat completions efficiently
- Chat History Management: Automatic tracking of conversation history
- OpenAI Compatible: Works with OpenAI and OpenAI-compatible endpoints (e.g., local AI models)
- Flexible Prompting: Support for system prompts and user messages
Core Components
AiSessionService
The main service class for managing AI chat sessions. Handles chat history, prompt management, and streaming response processing using Microsoft Semantic Kernel.
Usage Examples
Basic Chat Session
// Namespaces for Semantic Kernel and the OpenAI connector
// (plus the library's own namespace for AiSessionService)
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Set up Semantic Kernel with OpenAI
var apiKey = "your-api-key";
var modelId = "gpt-4";

var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: modelId,
        apiKey: apiKey);
var kernel = builder.Build();

// Configure execution settings
var options = new OpenAIPromptExecutionSettings
{
    ModelId = modelId,
    Temperature = 0.7F,
    MaxTokens = 2048,
    TopP = 1.0F,
    FrequencyPenalty = 0.0F,
    PresencePenalty = 0.0F
};

// Create the AI session service
var aiSessionService = new AiSessionService(kernel, options);

// Add a system prompt to set the assistant's behavior
aiSessionService.AddSystemPrompt("You are a helpful assistant that provides concise answers.");

// Add a user message
aiSessionService.AddUserPrompt("What is the capital of France?");

// Process the conversation; the assistant's reply is appended to ChatHistory
await aiSessionService.ProcessAsync();

// Access the chat history to review the conversation
foreach (var message in aiSessionService.ChatHistory)
{
    Console.WriteLine($"{message.Role}: {message.Content}");
}
Multi-Turn Conversation
var aiSessionService = new AiSessionService(kernel, options);
// Set initial context
aiSessionService.AddSystemPrompt("You are a coding assistant.");
// First exchange
aiSessionService.AddUserPrompt("How do I create a list in C#?");
await aiSessionService.ProcessAsync();
// Continue conversation
aiSessionService.AddUserPrompt("Can you show me an example with LINQ?");
await aiSessionService.ProcessAsync();
// Ask a follow-up question
aiSessionService.AddUserPrompt("How do I filter that list?");
await aiSessionService.ProcessAsync();
// Full conversation history is maintained in ChatHistory
Console.WriteLine($"Total messages: {aiSessionService.ChatHistory.Count}");
Using Local AI Models (e.g., AI Foundry, LM Studio)
// Configure for local AI endpoint
var localEndpoint = new Uri("http://localhost:5273/v1", UriKind.Absolute);
var modelId = "Phi-4-mini-instruct";
var apiKey = "not-used"; // Many local endpoints don't require API keys
var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: modelId,
        apiKey: apiKey,
        endpoint: localEndpoint);
var kernel = builder.Build();

var options = new OpenAIPromptExecutionSettings
{
    ModelId = modelId,
    Temperature = 0.7F,
    MaxTokens = 2048
};
var aiSessionService = new AiSessionService(kernel, options);
aiSessionService.AddSystemPrompt("You are a helpful AI assistant.");
aiSessionService.AddUserPrompt("Tell me a joke.");
await aiSessionService.ProcessAsync();
Custom Temperature and Token Settings
// More creative responses (higher temperature)
var creativeOptions = new OpenAIPromptExecutionSettings
{
    ModelId = "gpt-4",
    Temperature = 1.2F, // More creative/random
    MaxTokens = 4096,
    TopP = 0.95F
};

// More deterministic responses (lower temperature)
var deterministicOptions = new OpenAIPromptExecutionSettings
{
    ModelId = "gpt-4",
    Temperature = 0.1F, // More focused/deterministic
    MaxTokens = 1024,
    TopP = 0.9F
};
API Reference
AiSessionService
Constructor
public AiSessionService(
    Kernel kernel,
    OpenAIPromptExecutionSettings options)
Creates a new AI session service with the specified kernel and execution settings.
Parameters:
kernel: The Semantic Kernel instance configured with a chat completion service
options: OpenAI prompt execution settings (temperature, max tokens, etc.)
Properties
ChatHistory
public ChatHistory ChatHistory
Gets the chat history containing all messages (system, user, and assistant messages) in the conversation.
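Usage (illustrative, reusing the aiSessionService from the examples above):
// ChatHistory is enumerable and exposes Count, as in the multi-turn example
Console.WriteLine($"Messages so far: {aiSessionService.ChatHistory.Count}");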
Methods
AddSystemPrompt
public void AddSystemPrompt(string systemPrompt)
Adds a system message to the chat history. System messages set the behavior and context for the AI assistant.
Parameters:
systemPrompt: The system prompt text
Usage:
aiSessionService.AddSystemPrompt("You are a helpful coding assistant specialized in C#.");
AddUserPrompt
public void AddUserPrompt(string userPrompt)
Adds a user message to the chat history.
Parameters:
userPrompt: The user's message text
Usage:
aiSessionService.AddUserPrompt("How do I implement dependency injection?");
ProcessAsync
public async Task ProcessAsync()
Processes the current chat history and gets streaming responses from the AI service. Assistant responses are automatically added to the chat history.
Returns: A task representing the asynchronous operation
Usage:
await aiSessionService.ProcessAsync();
Configuration Best Practices
Temperature Settings
- 0.0 - 0.3: Deterministic, focused responses (good for factual Q&A, code generation)
- 0.4 - 0.7: Balanced creativity and consistency (good for general conversation)
- 0.8 - 1.5: More creative and varied responses (good for brainstorming, creative writing)
Max Tokens
- Set based on your expected response length
- Consider your model's context window limit
- Common values: 1024 (short), 2048 (medium), 4096+ (long)
System Prompts
- Use system prompts to set the AI's role and behavior
- Include relevant context and constraints
- Keep system prompts concise but clear
Chat History
- The full conversation history is maintained automatically
- Consider clearing or managing history for long conversations to stay within token limits (a trimming sketch follows this list)
- Access ChatHistory to review or log conversations
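A minimal history-trimming sketch (TrimHistory below is a hypothetical helper, not a library API; it assumes ChatHistory behaves as Semantic Kernel's IList<ChatMessageContent>):

using Microsoft.SemanticKernel.ChatCompletion;

static void TrimHistory(ChatHistory history, int maxMessages)
{
    // Drop the oldest non-system messages until the history fits the budget
    while (history.Count > maxMessages)
    {
        var removeAt = history[0].Role == AuthorRole.System ? 1 : 0;
        if (removeAt >= history.Count)
        {
            break; // only the system prompt remains
        }
        history.RemoveAt(removeAt);
    }
}

// Example: keep the system prompt plus the nine most recent messages
TrimHistory(aiSessionService.ChatHistory, maxMessages: 10);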
Streaming Response Handling
The service uses streaming responses for better performance and user experience:
- Responses arrive incrementally as they're generated
- Reduces perceived latency for long responses
- Automatically assembled into complete messages in ChatHistory
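For reference, consuming a streaming completion directly with Semantic Kernel typically looks like the sketch below (illustrative only; the library's internal ProcessAsync may differ in detail, and the kernel and options variables are reused from the earlier examples):

using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var responseBuilder = new StringBuilder();

await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(
    aiSessionService.ChatHistory, options, kernel))
{
    responseBuilder.Append(chunk.Content); // chunks arrive incrementally as tokens are generated
}

// The assembled text then becomes a single assistant message in ChatHistory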
Error Handling
try
{
    await aiSessionService.ProcessAsync();
}
catch (HttpRequestException ex)
{
    // Handle network/connection errors
    Console.WriteLine($"Connection error: {ex.Message}");
}
catch (Exception ex)
{
    // Handle other errors
    Console.WriteLine($"Error processing AI request: {ex.Message}");
}
Integration with Semantic Kernel
This library is built on Microsoft Semantic Kernel, which provides:
- Multi-provider support (OpenAI, Azure OpenAI, Hugging Face, etc.)
- Plugin system for extending AI capabilities
- Memory and state management
- Advanced prompt engineering features
For advanced scenarios, you can access the underlying Kernel instance through dependency injection or direct access.
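For example, a host registration might look like the sketch below (an assumption: it uses Microsoft.Extensions.DependencyInjection; the library does not document a specific registration pattern):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var services = new ServiceCollection();

// Share one kernel and one set of default execution settings across the app
services.AddSingleton(_ => Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "your-api-key")
    .Build());
services.AddSingleton(new OpenAIPromptExecutionSettings { ModelId = "gpt-4", MaxTokens = 2048 });

// AiSessionService holds per-conversation state (ChatHistory),
// so resolve a fresh instance for each conversation
services.AddTransient<AiSessionService>();

var provider = services.BuildServiceProvider();
var session = provider.GetRequiredService<AiSessionService>();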
Testing
The library includes unit tests demonstrating integration with:
- Local AI models (e.g., AI Foundry with Phi models)
- OpenAI-compatible endpoints
- System prompts and user messages
- Response processing and history tracking
Example Test Pattern
[Fact]
public async Task AiSession_BasicConversation_Success()
{
    // Arrange
    var kernel = CreateTestKernel();
    var options = CreateTestOptions();
    var service = new AiSessionService(kernel, options);

    // Act
    service.AddSystemPrompt("Test system prompt");
    service.AddUserPrompt("Test user prompt");
    await service.ProcessAsync();

    // Assert
    service.ChatHistory.Should().NotBeEmpty();
    service.ChatHistory.Should().HaveCountGreaterThan(2); // System + User + Assistant
}
Dependencies
- Microsoft.SemanticKernel: For AI integration and chat completion services
Supported AI Providers
Through Semantic Kernel, this library supports:
- OpenAI (GPT-3.5, GPT-4, GPT-4 Turbo)
- Azure OpenAI Service (see the sketch after this list)
- Local AI models (via OpenAI-compatible APIs)
- Hugging Face models
- Custom AI endpoints
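As one example, pointing the kernel at Azure OpenAI is a connector swap (a sketch assuming Semantic Kernel's Azure OpenAI connector; the deployment name, endpoint, and key are placeholders):

using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "my-gpt4-deployment",
        endpoint: "https://my-resource.openai.azure.com/",
        apiKey: "your-azure-api-key")
    .Build();

// AiSessionService usage is unchanged from the OpenAI examples above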
Target Framework
- .NET 10
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net10.0 is compatible. net10.0-android, net10.0-browser, net10.0-ios, net10.0-maccatalyst, net10.0-macos, net10.0-tvos, and net10.0-windows were computed. |
Package dependencies (net10.0):
- Microsoft.SemanticKernel (>= 1.68.0)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version History
| Version | Downloads | Last Updated |
|---|---|---|
| 10.0.219 | 87 | 1/5/2026 |
| 10.0.218 | 82 | 1/4/2026 |
| 10.0.217 | 96 | 1/4/2026 |
| 10.0.216 | 109 | 1/4/2026 |
| 10.0.215 | 98 | 1/4/2026 |
| 10.0.214 | 97 | 1/1/2026 |
| 10.0.213 | 144 | 1/1/2026 |
| 10.0.212 | 99 | 1/1/2026 |
| 10.0.211 | 96 | 12/31/2025 |
| 10.0.210 | 97 | 12/30/2025 |
| 10.0.209 | 96 | 12/30/2025 |
| 10.0.208 | 99 | 12/30/2025 |
| 10.0.207 | 96 | 12/29/2025 |
| 10.0.206 | 99 | 12/29/2025 |
| 10.0.205 | 182 | 12/24/2025 |
| 10.0.204 | 180 | 12/21/2025 |
| 10.0.203 | 279 | 12/18/2025 |
| 10.0.202 | 278 | 12/17/2025 |
| 10.0.200 | 281 | 12/17/2025 |
| 10.0.199 | 429 | 12/10/2025 |
| 10.0.197 | 177 | 12/5/2025 |
| 10.0.196 | 675 | 12/3/2025 |
| 10.0.195 | 680 | 12/3/2025 |
| 10.0.194 | 684 | 12/3/2025 |
| 10.0.193 | 682 | 12/2/2025 |
| 10.0.192 | 187 | 11/28/2025 |
| 10.0.190 | 194 | 11/27/2025 |
| 10.0.189 | 175 | 11/23/2025 |
| 10.0.187 | 178 | 11/23/2025 |
| 10.0.186 | 161 | 11/23/2025 |
| 10.0.184 | 420 | 11/20/2025 |
| 10.0.181-rc3 | 292 | 11/11/2025 |
| 10.0.180 | 303 | 11/11/2025 |
| 10.0.179-rc2 | 295 | 11/11/2025 |
| 10.0.178-rc2 | 246 | 11/10/2025 |
| 10.0.177-rc2 | 239 | 11/10/2025 |
| 10.0.176-rc2 | 207 | 11/6/2025 |
| 10.0.175-rc2 | 205 | 11/6/2025 |
| 10.0.174-rc2 | 204 | 11/5/2025 |
| 10.0.173-rc2 | 192 | 11/3/2025 |
| 10.0.172-rc2 | 145 | 11/2/2025 |
| 10.0.170-rc2 | 134 | 11/1/2025 |
| 10.0.169-rc2 | 127 | 11/1/2025 |
| 10.0.168-rc2 | 135 | 10/31/2025 |
| 10.0.166-rc2 | 137 | 10/31/2025 |
| 10.0.164-rc2 | 206 | 10/28/2025 |
| 10.0.162-rc2 | 190 | 10/24/2025 |
| 10.0.161 | 200 | 10/24/2025 |
| 9.0.151 | 138 | 10/17/2025 |
| 9.0.150 | 191 | 9/10/2025 |
| 9.0.146 | 135 | 8/15/2025 |
| 9.0.145 | 196 | 8/11/2025 |
| 9.0.144 | 201 | 8/8/2025 |
| 9.0.137 | 156 | 7/29/2025 |
| 9.0.136 | 155 | 7/29/2025 |
| 9.0.135 | 175 | 7/28/2025 |
| 9.0.134 | 202 | 7/9/2025 |
| 9.0.133 | 205 | 7/9/2025 |
| 9.0.132 | 190 | 7/9/2025 |
| 9.0.131 | 197 | 7/9/2025 |
| 9.0.130 | 199 | 7/7/2025 |