LM-Kit.NET
2024.12.5
See the version list below for details.
dotnet add package LM-Kit.NET --version 2024.12.5
NuGet\Install-Package LM-Kit.NET -Version 2024.12.5
<PackageReference Include="LM-Kit.NET" Version="2024.12.5" />
paket add LM-Kit.NET --version 2024.12.5
#r "nuget: LM-Kit.NET, 2024.12.5"
// Install LM-Kit.NET as a Cake Addin
#addin nuget:?package=LM-Kit.NET&version=2024.12.5

// Install LM-Kit.NET as a Cake Tool
#tool nuget:?package=LM-Kit.NET&version=2024.12.5
Enterprise-Grade .NET SDK for Integrating Generative AI Capabilities
Build Smarter Apps with Language Models
LM-Kit.NET integrates cutting-edge Generative AI into C# and VB.NET applications through on-device LLM inference, ensuring rapid, secure, and private AI performance without the need for cloud services. Key features include AI chatbot development, natural language processing (NLP), retrieval-augmented generation (RAG), structured data extraction, text improvement, translation, and more.
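As a minimal sketch of what a first integration can look like, the console chat session below assumes the LMKit.Model.LM and LMKit.TextGeneration.MultiTurnConversation types; the model path is a placeholder, and exact member names should be verified against the current LM-Kit.NET API reference.

```csharp
using System;
using LMKit.Model;
using LMKit.TextGeneration;

class QuickStart
{
    static void Main()
    {
        // Placeholder path: point this at any GGUF model supported by LM-Kit.NET.
        var model = new LM(@"C:\models\example-instruct.gguf");

        // A multi-turn conversation bound to the loaded model; inference runs on-device.
        var chat = new MultiTurnConversation(model)
        {
            SystemPrompt = "You are a helpful assistant."
        };

        var result = chat.Submit("Explain retrieval-augmented generation in one sentence.");
        Console.WriteLine(result.Completion);
    }
}
```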
Wide Range of Capabilities
LM-Kit.NET offers a suite of highly optimized low-level APIs designed to facilitate the development of fully customized Large Language Model (LLM) inference pipelines for C# and VB.NET developers.
Additionally, LM-Kit.NET provides an extensive array of high-level AI functionalities across multiple domains, grouped into the following categories:
Data Processing
- Structured Data Extraction: Accurately extract and structure data from any source using customizable extraction schemes.
- Retrieval-Augmented Generation (RAG): Enhance text generation with information retrieved from a large corpus.
Text Analysis
- Emotion and Sentiment Analysis: Detect and interpret the emotional tone of text (see the sketch below).
- Custom Text Classification: Categorize text into predefined classes based on content.
- Keyword Extraction: Identify the most relevant keywords or phrases within large volumes of text.
- Text Embeddings: Transform text into numerical representations that capture semantic meaning.
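As a rough illustration of the text-analysis capabilities, the sketch below assumes a SentimentAnalysis type in LMKit.TextAnalysis exposing a GetSentimentCategory method; the model path is a placeholder, and the exact type and member names should be checked against the LM-Kit.NET API reference.

```csharp
using System;
using LMKit.Model;
using LMKit.TextAnalysis;

class SentimentDemo
{
    static void Main()
    {
        // Placeholder path: any compatible GGUF model.
        var model = new LM(@"C:\models\example-instruct.gguf");

        // Assumed entry point for sentiment detection (verify against the API reference).
        var analyzer = new SentimentAnalysis(model);
        var category = analyzer.GetSentimentCategory("The new release is fast and easy to integrate!");

        Console.WriteLine(category); // e.g. Positive
    }
}
```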
AI Agents Orchestration
- Chatbot & Conversational AI: Develop AI chatbots capable of engaging in natural and context-aware conversations.
- Question Answering: Provide answers to queries, supporting both single-turn and multi-turn interactions.
- Function Calling: Dynamically invoke specific functions within your application.
Language Services
- Language Detection: Identify the language of text input with high accuracy.
- Translation: Seamlessly convert text between multiple languages (see the sketch below).
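The translation sketch below assumes a TextTranslation type in LMKit.Translation and a Language enumeration for the target language; both names are approximations and should be confirmed against the current API reference before use.

```csharp
using System;
using LMKit.Model;
using LMKit.Translation;

class TranslationDemo
{
    static void Main()
    {
        var model = new LM(@"C:\models\example-instruct.gguf"); // placeholder path

        // Assumed translation entry point; verify the exact type, method signature,
        // and target-language enumeration in the LM-Kit.NET API reference.
        var translator = new TextTranslation(model);
        string french = translator.Translate(
            "On-device inference keeps sensitive data inside your environment.",
            LMKit.TextGeneration.Language.French);

        Console.WriteLine(french);
    }
}
```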
Text Generation
- Structured Content Creation: Generate content that follows a predefined structure using JSON schemas, templates, or grammar rules.
- Content Summarization: Condense long pieces of text into concise summaries (see the sketch below).
- Grammar & Spell Check: Correct grammar and spelling in text of any length.
- Text Enhancement: Rewrite text to improve clarity or style, or to adapt it to a specific communication tone.
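For summarization, the sketch below assumes a Summarizer type in LMKit.TextGeneration whose Summarize method returns a result exposing a Summary property; these names are approximate and should be verified against the official API reference.

```csharp
using System;
using System.IO;
using LMKit.Model;
using LMKit.TextGeneration;

class SummarizationDemo
{
    static void Main()
    {
        var model = new LM(@"C:\models\example-instruct.gguf"); // placeholder path

        // Assumed summarization entry point; confirm the exact result shape
        // (e.g., a Summary property) in the LM-Kit.NET API reference.
        var summarizer = new Summarizer(model);
        var result = summarizer.Summarize(File.ReadAllText("article.txt"));

        Console.WriteLine(result.Summary);
    }
}
```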
Model Customization and Optimization
- Model Fine-Tuning: Customize pre-trained models to better suit specific needs.
- Model Quantization: Optimize models for efficient inference.
- LoRA Adapter Support: Merge Low-Rank Adaptation (LoRA) transformations into base models for efficient fine-tuning.
And More
- Additional Features: Explore other functionalities that extend your application's capabilities.
These ever-expanding functionalities ensure seamless integration of advanced AI solutions, tailored to meet diverse needs through a single Software Development Kit (SDK) for C# and VB.NET application development.
Run Local LLMs on Any Device
The LM-Kit.NET model inference system is built to deliver high performance across a wide variety of hardware with minimal setup and no external dependencies. LM-Kit.NET runs inference entirely on-device, also known as edge computing, providing users with full control and precise tuning of the inference process. Moreover, LM-Kit.NET supports an ever-growing list of model architectures, including LLaMA-2, LLaMA-3, Mistral, Falcon, Phi, and others.
Highest Degree of Performance
1. Optimized for Various GPUs and CPUs
LM-Kit.NET is engineered to maximize the capabilities of a wide range of hardware configurations, ensuring top-tier performance across all platforms. This multi-platform optimization allows LM-Kit.NET to leverage the unique hardware strengths of each device: it automatically utilizes CUDA on NVIDIA GPUs to significantly boost computation speeds, Metal on Apple devices to enhance both graphics and processing tasks, and Vulkan to efficiently harness multiple GPUs, including those from AMD, Intel, and NVIDIA, across diverse environments.
2. State-of-the-Art Architectural Foundations
At the core of LM-Kit.NET lies llama.cpp, which serves as the native inference framework. This engine has been rigorously optimized to handle a wide array of scenarios efficiently. Its advanced internal caching and recycling mechanisms maintain high performance consistently, even under varied operational conditions. Whether your application runs a single instance or multiple concurrent instances, LM-Kit.NET's core system orchestrates all requests smoothly, delivering rapid performance while minimizing resource consumption.
3. Unrivaled Performance
Experience model inference up to 5× faster with LM-Kit.NET, thanks to cutting-edge underlying technologies that are continuously refined and benchmarked to keep you ahead of the curve.
Be an Early Adopter of the Latest and Future Generative AI Innovations
LM-Kit.NET is crafted by industry experts employing a strategy of continuous innovation. It is designed to rapidly address emerging market needs and introduce new capabilities to modernize existing applications. Leveraging state-of-the-art AI technologies, LM-Kit.NET offers a modern, user-friendly, and intuitive API suite, making advanced AI accessible for any type of application in C# and VB.NET.
Maintain Full Control Over Your Data
Maintaining full control over your data is crucial for both privacy and security. By using LM-Kit.NET, which performs model inference directly on-device, you ensure that your sensitive data remains within your controlled environment and does not traverse external networks. Here are some key benefits of this approach:
1. Enhanced Privacy
All data processing is done locally on your device, eliminating the need to send data to a remote server. This drastically reduces the risk of exposure or leakage of sensitive information, keeping your data confidential.
2. Increased Security
With zero external requests, the risk of data interception during transmission is completely eliminated. This closed-system approach minimizes vulnerabilities that are often exploited in data breaches, offering a more secure solution.
3. Faster Response Times
Processing data locally reduces the latency typically associated with sending data to a remote server and waiting for a response. This results in quicker model inferences, leading to faster decision-making and improved user experience.
4. Reduced Bandwidth Usage
By avoiding the need to transfer large volumes of data over the internet, LM-Kit.NET minimizes bandwidth consumption. This is particularly beneficial in environments with limited or costly data connectivity.
5. Full Compliance with Data Regulations
Local processing helps in complying with strict data protection regulations, such as GDPR or HIPAA, which often require certain types of data to be stored and processed within specific geographical boundaries or environments. By leveraging LM-Kit.NET's on-device processing capabilities, organizations can achieve higher levels of data autonomy and protection while still benefiting from advanced computational models and real-time analytics.
Seamless Integration and Simple Deployment
LM-Kit.NET offers an exceptionally streamlined deployment model, packaged as a single NuGet package for all supported platforms. Integrating LM-Kit.NET into any C# or VB.NET application is straightforward, typically requiring just a few clicks. Under the hood, LM-Kit.NET combines C# and C++ code carefully crafted with no external dependencies.
1. Simplified Integration
LM-Kit.NET requires no external containers or complex deployment procedures, making the integration process exceptionally straightforward. This approach significantly reduces development time and lowers the learning curve, enabling a broader range of developers to effectively deploy and leverage the technology.
2. Streamlined Deployment
Designed for efficiency and simplicity, LM-Kit.NET runs directly within the same application process that calls it by default, avoiding the complexities and resource demands typically associated with containerized systems. This direct integration accelerates performance and simplifies incorporation into existing applications by removing common hurdles associated with container use.
3. Efficient Resource Management
Operating in-process, LM-Kit.NET minimizes its impact on system resources, making it ideal for devices with limited capacity or situations where maximizing computing efficiency is essential.
4. Enhanced Reliability
By avoiding reliance on external services or containers, LM-Kit.NET offers more stable and predictable performance. This reliability is vital for applications that demand consistent, rapid data processing without external dependencies.
Supported Operating Systems
LM-Kit.NET is designed for full compatibility with a wide range of operating systems, ensuring smooth and reliable performance on all supported platforms:
- Windows: Compatible with Windows 7 through the latest release.
- macOS: Supports macOS 11 and all subsequent versions.
- Linux: Runs on distributions with glibc 2.27 or newer.
Supported .NET Frameworks
LM-Kit.NET is compatible with a wide range of .NET versions, from .NET Framework 4.6.2 up to .NET 9. To maximize performance through framework-specific optimizations, separate binaries are provided for each supported target framework.
Hugging Face Integration
The LM-Kit section on Hugging Face provides state-of-the-art quantized models that have been rigorously tested with the LM-Kit SDK. Moreover, LM-Kit enables you to seamlessly load models directly from Hugging Face repositories via the Hugging Face API, simplifying the integration and deployment of the latest models into your C# and VB.NET applications.
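As a sketch of that workflow, the example below passes a Hugging Face download URI straight to the LM constructor, which is assumed to fetch and cache the model locally; the repository URL is a placeholder, and actual model links can be found in the LM-Kit section on Hugging Face.

```csharp
using System;
using LMKit.Model;
using LMKit.TextGeneration;

class HuggingFaceDemo
{
    static void Main()
    {
        // Placeholder URL: replace with an actual GGUF file from an LM-Kit repository on Hugging Face.
        var modelUri = new Uri("https://huggingface.co/lm-kit/example-model-gguf/resolve/main/model.gguf");

        // The LM constructor is assumed to accept a remote URI and cache the download locally.
        var model = new LM(modelUri);

        var chat = new SingleTurnConversation(model);
        Console.WriteLine(chat.Submit("Hello from a model loaded directly from Hugging Face!").Completion);
    }
}
```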
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 is compatible. net5.0-windows was computed. net6.0 is compatible. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 is compatible. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 is compatible. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies

.NETCoreApp 2.1
- No dependencies.

.NETCoreApp 3.1
- No dependencies.

.NETStandard 2.0
- Microsoft.Bcl.AsyncInterfaces (>= 8.0.0)
- System.Buffers (>= 4.5.1)
- System.Linq.Async (>= 6.0.1)
- System.Memory (>= 4.5.5)
- System.Numerics.Vectors (>= 4.5.0)
- System.Runtime.CompilerServices.Unsafe (>= 6.0.0)
- System.Text.Encodings.Web (>= 8.0.0)
- System.Text.Json (>= 8.0.5)
- System.Threading.Tasks.Extensions (>= 4.5.4)
- System.ValueTuple (>= 4.3.0)

net5.0
- No dependencies.

net6.0
- No dependencies.

net7.0
- No dependencies.

net8.0
- No dependencies.

net9.0
- No dependencies.
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories
This package is not used by any popular GitHub repositories.
Version | Downloads | Last updated |
---|---|---|
2024.12.12 | 92 | 12/26/2024 |
2024.12.11 | 316 | 12/23/2024 |
2024.12.10 | 334 | 12/22/2024 |
2024.12.9 | 231 | 12/20/2024 |
2024.12.8 | 160 | 12/19/2024 |
2024.12.7 | 385 | 12/15/2024 |
2024.12.6 | 323 | 12/13/2024 |
2024.12.5 | 290 | 12/11/2024 |
2024.12.4 | 314 | 12/10/2024 |
2024.12.3 | 384 | 12/7/2024 |
2024.12.2 | 209 | 12/7/2024 |
2024.12.1 | 171 | 12/6/2024 |
2024.11.10 | 306 | 11/29/2024 |
2024.11.9 | 194 | 11/27/2024 |
2024.11.8 | 210 | 11/25/2024 |
2024.11.7 | 159 | 11/25/2024 |
2024.11.6 | 203 | 11/25/2024 |
2024.11.5 | 332 | 11/23/2024 |
2024.11.4 | 269 | 11/18/2024 |
2024.11.3 | 165 | 11/12/2024 |
2024.11.2 | 139 | 11/5/2024 |
2024.11.1 | 147 | 11/4/2024 |
2024.10.5 | 182 | 10/24/2024 |
2024.10.4 | 217 | 10/17/2024 |
2024.10.3 | 141 | 10/16/2024 |
2024.10.2 | 172 | 10/9/2024 |
2024.10.1 | 179 | 10/1/2024 |
2024.9.4 | 153 | 9/25/2024 |
2024.9.3 | 191 | 9/18/2024 |
2024.9.2 | 175 | 9/11/2024 |
2024.9.1 | 171 | 9/6/2024 |
2024.9.0 | 160 | 9/3/2024 |
2024.8.4 | 171 | 8/26/2024 |
2024.8.3 | 195 | 8/21/2024 |
2024.8.2 | 159 | 8/20/2024 |
2024.8.1 | 176 | 8/15/2024 |
2024.8.0 | 130 | 8/11/2024 |
2024.7.10 | 128 | 8/6/2024 |
2024.7.9 | 102 | 7/31/2024 |
2024.7.8 | 86 | 7/30/2024 |
2024.7.7 | 100 | 7/29/2024 |
2024.7.6 | 106 | 7/27/2024 |
2024.7.5 | 135 | 7/26/2024 |