Azure.AI.Vision.ImageAnalysis
1.0.0
dotnet add package Azure.AI.Vision.ImageAnalysis --version 1.0.0
NuGet\Install-Package Azure.AI.Vision.ImageAnalysis -Version 1.0.0
<PackageReference Include="Azure.AI.Vision.ImageAnalysis" Version="1.0.0" />
paket add Azure.AI.Vision.ImageAnalysis --version 1.0.0
#r "nuget: Azure.AI.Vision.ImageAnalysis, 1.0.0"
// Install Azure.AI.Vision.ImageAnalysis as a Cake Addin
#addin nuget:?package=Azure.AI.Vision.ImageAnalysis&version=1.0.0

// Install Azure.AI.Vision.ImageAnalysis as a Cake Tool
#tool nuget:?package=Azure.AI.Vision.ImageAnalysis&version=1.0.0
Azure Image Analysis client library for .NET
The Azure.AI.Vision.ImageAnalysis client library provides AI algorithms for processing images and returning information about their content. It enables you to extract one or more visual features from an image simultaneously, including getting a caption for the image, extracting text shown in the image (OCR), and detecting objects. For more information on the service and the supported visual features, see Image Analysis overview, and the Concepts page.
Use the Image Analysis client library to:
- Authenticate against the service
- Select which features you would like to extract
- Upload an image for analysis, or provide an image URL
- Get the analysis result
Product documentation | Samples | Vision Studio | API reference documentation | Package (NuGet) | SDK source code
Getting started
Prerequisites
- An Azure subscription.
- A Computer Vision resource in your Azure subscription.
  - You will need the key and endpoint from this resource to authenticate against the service.
  - You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- To run Image Analysis with the Caption or DenseCaptions features, the Azure resource needs to be from a GPU-supported region. See the note here for a list of supported regions.
Install the package
dotnet add package Azure.AI.Vision.ImageAnalysis
Authenticate the client
In order to interact with Azure Image Analysis, you'll need to create an instance of the ImageAnalysisClient class. To configure a client for use with Azure Image Analysis, provide a valid endpoint URI to an Azure Computer Vision resource along with a corresponding key credential authorized to use the Azure Computer Vision resource.
using Azure;
using Azure.AI.Vision.ImageAnalysis;
using System;
using System.IO;
string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
string key = Environment.GetEnvironmentVariable("VISION_KEY");
// Create an Image Analysis client.
ImageAnalysisClient client = new ImageAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));
Here we are using environment variables to hold the endpoint and key for the Computer Vision Resource.
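If either environment variable is not set, GetEnvironmentVariable returns null and client construction fails with an unhelpful error. A minimal guard, reusing the variable names from the snippet above, might look like this:

```csharp
using System;

string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
string key = Environment.GetEnvironmentVariable("VISION_KEY");

// Fail fast with a clear message if the environment is not configured.
if (string.IsNullOrEmpty(endpoint) || string.IsNullOrEmpty(key))
{
    throw new InvalidOperationException(
        "Set the VISION_ENDPOINT and VISION_KEY environment variables before running this sample.");
}
```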
Create ImageAnalysisClient with a Microsoft Entra ID Credential
Prerequisites for Entra ID authentication:
- The role Cognitive Services User assigned to you. Role assignment can be done via the "Access Control (IAM)" tab of your Computer Vision resource in the Azure portal.
- Azure CLI installed.
- You are logged into your Azure account by running az login.

Also note that if you have multiple Azure subscriptions, the subscription that contains your Computer Vision resource must be your default subscription. Run az account list --output table to list all your subscriptions and see which one is the default. Run az account set --subscription "Your Subscription ID or Name" to change your default subscription.
Client subscription key authentication is used in most of the examples in this getting started guide, but you can also authenticate with Microsoft Entra ID (formerly Azure Active Directory) using the Azure Identity library. To use the DefaultAzureCredential provider shown below, or other credential providers provided with the Azure SDK, please install the Azure.Identity package:
dotnet add package Azure.Identity
using Azure.Identity;

string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
ImageAnalysisClient client = new ImageAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
Key concepts
Once you've initialized an ImageAnalysisClient, you need to select one or more visual features to analyze. The options are specified by the enum class VisualFeatures. The following features are supported:
- VisualFeatures.Caption (Examples | Samples): Generate a human-readable sentence that describes the content of an image.
- VisualFeatures.Read (Examples | Samples): Also known as Optical Character Recognition (OCR). Extract printed or handwritten text from images.
- VisualFeatures.DenseCaptions (Samples): Dense Captions provides more details by generating one-sentence captions for up to 10 different regions in the image, including one for the whole image.
- VisualFeatures.Tags (Samples): Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images.
- VisualFeatures.Objects (Samples): Object detection. This is similar to tagging, but focused on detecting physical objects in the image and returning their location.
- VisualFeatures.SmartCrops (Samples): Used to find a representative sub-region of the image for thumbnail generation, with priority given to include faces.
- VisualFeatures.People (Samples): Locate people in the image and return their location.
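Because VisualFeatures is a flags enum, several features can be requested in a single service round trip by combining values with the bitwise OR operator. A short sketch, assuming a client configured as shown earlier:

```csharp
// Request a caption and OCR results in one Analyze call.
ImageAnalysisResult result = client.Analyze(
    new Uri("https://aka.ms/azsdk/image-analysis/sample.jpg"),
    VisualFeatures.Caption | VisualFeatures.Read,
    new ImageAnalysisOptions { GenderNeutralCaption = true });

Console.WriteLine($"Caption: '{result.Caption.Text}'");
```

The result object then carries a populated property for each requested feature (Caption and Read here); properties for features you did not request are null.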
For more information about these features, see Image Analysis overview, and the Concepts page.
Analyze from image buffer or URL
The ImageAnalysisClient contains an Analyze method that has two overloads:
- Analyze(BinaryData ...): Analyze an image from an input BinaryData object. The client will upload the image to the service as part of the REST request.
- Analyze(Uri ...): Analyze an image from a publicly accessible URL, via the Uri object. The client will send the image URL to the service. The service will download the image.

The examples below demonstrate both. The Analyze examples populate the input BinaryData object by loading an image from a file on disk.
Supported image formats
Image Analysis works on images that meet the following requirements:
- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
- The file size of the image must be less than 20 megabytes (MB)
- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
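Since the service rejects images outside these limits, it can be worth validating the file size locally before uploading. A minimal sketch (the 20 MB constant matches the limit above; checking pixel dimensions would require decoding the image and is omitted here):

```csharp
using System;
using System.IO;

const long MaxBytes = 20 * 1024 * 1024; // 20 MB service limit

var file = new FileInfo("image-analysis-sample.jpg");

// Reject missing or oversized files before making a network call.
if (!file.Exists || file.Length >= MaxBytes)
{
    throw new ArgumentException($"'{file.Name}' is missing or exceeds the 20 MB limit.");
}
```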
Examples
The following sections provide code snippets covering these common Image Analysis scenarios:
- Generate an image caption for an image file
- Generate an image caption for an image URL
- Extract text (OCR) from an image file
- Extract text (OCR) from an image URL
These snippets use the client
from Create and authenticate the client.
See the Samples folder for fully working samples for all visual features, including asynchronous calls.
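The client also exposes an asynchronous version of each call. A sketch of the caption scenario using AnalyzeAsync, which takes the same arguments as the synchronous overloads:

```csharp
using System;
using System.Threading.Tasks;

// Asynchronously get a caption for an image at a public URL.
ImageAnalysisResult result = await client.AnalyzeAsync(
    new Uri("https://aka.ms/azsdk/image-analysis/sample.jpg"),
    VisualFeatures.Caption);

Console.WriteLine($"Caption: '{result.Caption.Text}'");
```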
Generate an image caption for an image file
This example demonstrates how to generate a one-sentence caption for the image file sample.jpg using the ImageAnalysisClient. The synchronous Analyze method call returns a CaptionResult object, which contains the generated caption and its confidence score in the range [0, 1]. By default, the caption may contain gender terms (for example: "man", "woman", "boy", "girl"). You have the option to request gender-neutral terms (for example: "person", "child") by setting GenderNeutralCaption = true when calling Analyze.

Notes:
- Caption is only available in some Azure regions. See Prerequisites.
- Caption is currently supported only in English.
// Use a file stream to pass the image data to the analyze call
using FileStream stream = new FileStream("image-analysis-sample.jpg", FileMode.Open);
// Get a caption for the image.
ImageAnalysisResult result = client.Analyze(
BinaryData.FromStream(stream),
VisualFeatures.Caption,
new ImageAnalysisOptions { GenderNeutralCaption = true });
// Print caption results to the console
Console.WriteLine($"Image analysis results:");
Console.WriteLine($" Caption:");
Console.WriteLine($" '{result.Caption.Text}', Confidence {result.Caption.Confidence:F4}");
Generate an image caption for an image URL
This example is similar to the one above, except it calls the Analyze method with a publicly accessible image URL instead of a file name.
// Get a caption for the image.
ImageAnalysisResult result = client.Analyze(
new Uri("https://aka.ms/azsdk/image-analysis/sample.jpg"),
VisualFeatures.Caption,
new ImageAnalysisOptions { GenderNeutralCaption = true });
// Print caption results to the console
Console.WriteLine($"Image analysis results:");
Console.WriteLine($" Caption:");
Console.WriteLine($" '{result.Caption.Text}', Confidence {result.Caption.Confidence:F4}");
Extract text from an image file
This example demonstrates how to extract printed or handwritten text from the image file sample.jpg using the ImageAnalysisClient. The synchronous (blocking) Analyze method call returns a ReadResult object. This object includes a list of text lines and a bounding polygon surrounding each text line. For each line, it also returns a list of words in the text line and a bounding polygon surrounding each word.
// Load image to analyze into a stream
using FileStream stream = new FileStream("image-analysis-sample.jpg", FileMode.Open);
// Extract text (OCR) from an image stream.
ImageAnalysisResult result = client.Analyze(
BinaryData.FromStream(stream),
VisualFeatures.Read);
// Print text (OCR) analysis results to the console
Console.WriteLine("Image analysis results:");
Console.WriteLine(" Read:");
foreach (DetectedTextBlock block in result.Read.Blocks)
foreach (DetectedTextLine line in block.Lines)
{
Console.WriteLine($" Line: '{line.Text}', Bounding Polygon: [{string.Join(" ", line.BoundingPolygon)}]");
foreach (DetectedTextWord word in line.Words)
{
Console.WriteLine($" Word: '{word.Text}', Confidence {word.Confidence.ToString("#.####")}, Bounding Polygon: [{string.Join(" ", word.BoundingPolygon)}]");
}
}
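If you only need the recognized text as a plain string, the nested blocks, lines, and words can be flattened, for example with LINQ. A sketch, assuming the result variable returned by the Analyze call above:

```csharp
using System;
using System.Linq;

// Join all recognized lines into a single newline-separated string.
string fullText = string.Join(
    Environment.NewLine,
    result.Read.Blocks
        .SelectMany(block => block.Lines)
        .Select(line => line.Text));

Console.WriteLine(fullText);
```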
Extract text from an image URL
This example demonstrates how to extract printed or handwritten text from a publicly accessible image URL.
// Extract text (OCR) from an image URL.
ImageAnalysisResult result = client.Analyze(
new Uri("https://aka.ms/azsdk/image-analysis/sample.jpg"),
VisualFeatures.Read);
// Print text (OCR) analysis results to the console
Console.WriteLine("Image analysis results:");
Console.WriteLine(" Read:");
foreach (DetectedTextBlock block in result.Read.Blocks)
foreach (DetectedTextLine line in block.Lines)
{
Console.WriteLine($" Line: '{line.Text}', Bounding Polygon: [{string.Join(" ", line.BoundingPolygon)}]");
foreach (DetectedTextWord word in line.Words)
{
Console.WriteLine($" Word: '{word.Text}', Confidence {word.Confidence.ToString("#.####")}, Bounding Polygon: [{string.Join(" ", word.BoundingPolygon)}]");
}
}
Troubleshooting
Common errors
When you interact with Image Analysis using the .NET SDK, errors returned by the service correspond to the same HTTP status codes returned for REST API requests. For example, if you try to analyze an image that is not accessible due to a broken URL, a 400 status is returned, indicating a bad request.
Handling exceptions
In the following snippet, the error is handled gracefully by catching the exception and displaying additional information about the error.
var imageUrl = new Uri("https://aka.ms.invalid/azai/vision/image-analysis-sample.jpg");
try
{
var result = client.Analyze(imageUrl, VisualFeatures.Caption);
}
catch (RequestFailedException e)
{
if (e.Status == 400)
{
Console.WriteLine("Error analyzing image.");
Console.WriteLine($"HTTP status code {e.Status}: {e.Message}");
}
else
{
throw;
}
}
You can learn more about how to enable SDK logging here.
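As one option, console logging of the SDK's events, including HTTP request and response traffic, can be enabled with the AzureEventSourceListener from Azure.Core. A sketch (verbosity and output format are configurable):

```csharp
using Azure.Core.Diagnostics;

// Print all Azure SDK events to the console for the lifetime of the listener.
using var listener = AzureEventSourceListener.CreateConsoleLogger();
```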
Next steps
Beyond the introductory scenarios discussed, the Azure Image Analysis client library offers support for additional scenarios to help take advantage of the full feature set of the Azure Image Analysis service. In order to help explore some of these scenarios, the Image Analysis client library offers a project of samples to serve as an illustration for common scenarios. Please see the samples README for details.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net5.0 was computed. net5.0-windows was computed. net6.0 was computed. net6.0-android was computed. net6.0-ios was computed. net6.0-maccatalyst was computed. net6.0-macos was computed. net6.0-tvos was computed. net6.0-windows was computed. net7.0 was computed. net7.0-android was computed. net7.0-ios was computed. net7.0-maccatalyst was computed. net7.0-macos was computed. net7.0-tvos was computed. net7.0-windows was computed. net8.0 was computed. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. |
.NET Core | netcoreapp2.0 was computed. netcoreapp2.1 was computed. netcoreapp2.2 was computed. netcoreapp3.0 was computed. netcoreapp3.1 was computed. |
.NET Standard | netstandard2.0 is compatible. netstandard2.1 was computed. |
.NET Framework | net461 was computed. net462 was computed. net463 was computed. net47 was computed. net471 was computed. net472 was computed. net48 was computed. net481 was computed. |
MonoAndroid | monoandroid was computed. |
MonoMac | monomac was computed. |
MonoTouch | monotouch was computed. |
Tizen | tizen40 was computed. tizen60 was computed. |
Xamarin.iOS | xamarinios was computed. |
Xamarin.Mac | xamarinmac was computed. |
Xamarin.TVOS | xamarintvos was computed. |
Xamarin.WatchOS | xamarinwatchos was computed. |
Dependencies (.NETStandard 2.0):
- Azure.Core (>= 1.44.1)
- System.ClientModel (>= 1.2.1)
- System.Text.Json (>= 6.0.10)
NuGet packages
This package is not used by any NuGet packages.
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on Azure.AI.Vision.ImageAnalysis:
Repository | Stars |
---|---|
MicrosoftLearning/mslearn-ai-vision (Lab files for Azure AI Vision modules) | |
Version | Downloads | Last updated |
---|---|---|
1.0.0 | 2,620 | 10/24/2024 |
1.0.0-beta.3 | 82,950 | 6/14/2024 |
1.0.0-beta.2 | 54,771 | 2/15/2024 |
1.0.0-beta.1 | 48,305 | 1/22/2024 |
0.15.1-beta.1 | 121,796 | 9/6/2023 |
0.13.0-beta.1 | 10,869 | 7/1/2023 |
0.11.1-beta.1 | 8,082 | 5/2/2023 |
0.10.0-beta.1 | 5,926 | 4/3/2023 |
0.9.0-beta.1 | 3,425 | 3/4/2023 |
0.8.1-beta.1 | 439 | 2/4/2023 |
0.8.0-beta.0.33537970 | 119 | 1/17/2023 |
0.8.0-beta.0.33395485 | 148 | 1/10/2023 |