[Python] Add agent-framework-azure-ai-contentunderstanding package#4829
Open
yungshinlintw wants to merge 72 commits into microsoft:main from
Conversation
Add Azure Content Understanding integration as a context provider for the Agent Framework. The package automatically analyzes file attachments (documents, images, audio, video) using Azure CU and injects structured results (markdown, fields) into the LLM context.

Key features:
- Multi-document session state with status tracking (pending/ready/failed)
- Configurable timeout with async background fallback for large files
- Output filtering via AnalysisSection enum
- Auto-registered list_documents() and get_analyzed_document() tools
- Supports all CU modalities: documents, images, audio, video
- Content limits enforcement (pages, file size, duration)
- Binary stripping of supported files from input messages

Public API:
- ContentUnderstandingContextProvider (main class)
- AnalysisSection (output section selector enum)
- ContentLimits (configurable limits dataclass)

Tests: 46 unit tests, 91% coverage; all linting and type checks pass.
- Replace synthetic fixtures with real CU API responses (sanitized)
- Update test assertions to match real data (Contoso vs CONTOSO, TotalAmount vs InvoiceTotal, field values from real analysis)
- Add --pre install note in README (preview package)
- Document unenforced ContentLimits fields (max_pages, duration)
Align naming with the Azure SDK convention and AF pattern:
- Directory: azure-contentunderstanding -> azure-ai-contentunderstanding
- PyPI: agent-framework-azure-contentunderstanding -> agent-framework-azure-ai-contentunderstanding
- Module: agent_framework_azure_contentunderstanding -> agent_framework_azure_ai_contentunderstanding

CI fixes:
- Inline conftest helpers to avoid cross-package import collision in xdist
- Remove PyPI badge and dead API reference link from README (package not published yet)
Python Test Coverage Report •
Python Unit Test Overview
(coverage table not recoverable from this scrape)
- document_qa.py: Single PDF upload, CU context provider, follow-up Q&A
- invoice_processing.py: Structured field extraction with prebuilt-invoice
- multimodal_chat.py: Multi-file session with status tracking
- Add ruff per-file-ignores for samples/ directory
- Update README with samples section, env vars, and run instructions
…earch)
- S3: devui_multimodal_agent/ — DevUI web UI with CU-powered file analysis
- S4: large_doc_file_search.py — CU extraction + OpenAI vector store RAG
- Update README and samples/README.md with all 5 samples
Add FileSearchConfig — when provided, CU-extracted markdown is automatically uploaded to an OpenAI vector store and a file_search tool is registered on the context. This enables token-efficient RAG retrieval for large documents without users needing to manage vector stores manually.

- FileSearchConfig dataclass (openai_client, vector_store_name)
- Auto-create vector store, upload markdown, register file_search tool
- Auto-cleanup on close()
- When file_search is enabled, skip full content injection (use RAG instead)
- Update large_doc_file_search sample to use the integration
- 4 new tests (50 total, 90% coverage)
Follow the established AF pattern: check for an API key environment variable first, then fall back to AzureCliCredential. Supports the AZURE_OPENAI_API_KEY and AZURE_CONTENTUNDERSTANDING_API_KEY environment variables.
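The resolution order described above can be sketched as follows. This is illustrative, not the package's actual helper: the environment variable names come from the commit message, and in real code the tuple results would be an `AzureKeyCredential` or `AzureCliCredential` instance.

```python
import os

def resolve_credential(env=os.environ):
    """Prefer an API key from the environment, else fall back to Azure CLI auth."""
    # Order mirrors the commit message: package-specific key first,
    # then the generic Azure OpenAI key, then AzureCliCredential.
    for var in ("AZURE_CONTENTUNDERSTANDING_API_KEY", "AZURE_OPENAI_API_KEY"):
        key = env.get(var)
        if key:
            return ("key", key)   # wrap in AzureKeyCredential in real code
    return ("cli", None)          # fall back to AzureCliCredential()
```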
…zy init

_context_provider.py:
- Make analyzer_id optional (default None) with auto-detection by media type prefix: audio -> audioSearch, video -> videoSearch, else documentSearch
- Add _ensure_initialized() for lazy client creation in before_run()
- Add FileSearchConfig-based vector store upload
- Fix: background-completed docs in file_search mode now upload to the vector store instead of injecting full markdown into context messages
- Add _pending_uploads queue for deferred vector store uploads

devui_file_search_agent/ (new sample):
- DevUI agent combining CU extraction + OpenAI file_search RAG

azure_responses_agent (existing sample fix):
- Add AzureCliCredential support and AZURE_AI_PROJECT_ENDPOINT fallback

Tests (19 new); docs updated (AGENTS.md, README.md)
…tor store expiration

- Add three-layer MIME detection (fast path → filetype binary sniff → filename fallback) to handle unreliable upstream MIME types (e.g. mp4 sent as application/octet-stream). Adds filetype>=1.2,<2 dependency.
- Media-aware output formatting: video shows duration/resolution plus all fields as JSON; audio promotes Summary as prose; document output unchanged.
- Unified timeout for all media types (removed the file_search special case that waited indefinitely for video/audio). All files use max_wait with background polling fallback.
- Vector store created with expires_after=1 day as a crash safety net.
- Add 8 MIME sniffing tests (TestMimeSniffing class).
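The three-layer detection might look roughly like this. It is a sketch: the real package uses the `filetype` library for the binary sniff, and the inline MP4 magic-byte check here is only a stand-in for that layer.

```python
import mimetypes
from typing import Optional

def detect_media_type(declared: Optional[str], data: bytes,
                      filename: Optional[str]) -> Optional[str]:
    """Three-layer MIME detection sketch (assumed shape, not the package's code)."""
    # Layer 1 (fast path): trust a specific declared MIME type.
    if declared and declared != "application/octet-stream":
        return declared
    # Layer 2: sniff magic bytes. The package delegates to `filetype`;
    # a minimal MP4 'ftyp' box check stands in here.
    if len(data) >= 12 and data[4:8] == b"ftyp":
        return "video/mp4"
    # Layer 3: fall back to the filename extension.
    if filename:
        guessed, _ = mimetypes.guess_type(filename)
        return guessed
    return None
```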
CU's prebuilt-videoSearch and prebuilt-audioSearch analyzers split long media files into multiple `contents[]` segments. Previously, `_extract_sections()` only read `contents[0]`, causing truncated duration, missing transcript, and incomplete fields for any video/audio longer than a single scene.

Now iterates all segments and merges:
- duration: global min(startTimeMs) → max(endTimeMs)
- markdown: concatenated with `---` separators
- fields: same-named fields collected into a per-segment list
- metadata (kind, resolution): taken from the first segment

Single-segment results (documents, short audio) are unaffected. Update the test fixture to a realistic 3-segment video structure and expand assertions to verify multi-segment merging. Add documentation for multi-segment processing and the speaker diarization limitation.
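A minimal sketch of those merge rules. Key names follow the `contents[]` shape described in this commit (startTimeMs, endTimeMs, markdown, fields, kind); the helper itself is illustrative:

```python
def merge_segments(contents: list) -> dict:
    """Merge multi-segment CU results: global duration, joined markdown,
    per-segment field lists, metadata from the first segment."""
    starts = [c["startTimeMs"] for c in contents if c.get("startTimeMs") is not None]
    ends = [c["endTimeMs"] for c in contents if c.get("endTimeMs") is not None]
    fields = {}
    for c in contents:
        for name, value in c.get("fields", {}).items():
            fields.setdefault(name, []).append(value)  # same-named -> list
    return {
        # 'is None' guards keep startTimeMs=0 (a valid value) in play.
        "duration_ms": (max(ends) - min(starts)) if starts and ends else None,
        "markdown": "\n---\n".join(c.get("markdown", "") for c in contents),
        "fields": fields,
        "kind": contents[0].get("kind"),  # metadata from the first segment
    }
```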
- Improve class docstring: clarify endpoint (Azure AI Foundry URL with example), credential (AzureKeyCredential vs Entra ID), and analyzer_id (prebuilt/custom with auto-selection behavior and reference links)
- Add SUPPORTED_MEDIA_TYPES comments explaining MIME-based matching behavior and add missing file types per CU service docs
- Use a namespaced logger to align with other packages
- Remove ContentLimits and related code/tests
- Rename DEFAULT_MAX_WAIT to DEFAULT_MAX_WAIT_SECONDS for clarity
- Add vector_store_id field to FileSearchConfig (None = auto-create)
- Track _owns_vector_store to only delete auto-created stores on close()
- Remove vector_store_name; use internal _DEFAULT_VECTOR_STORE_NAME
- Add inline comments for private state fields
- Document the output_sections default in the docstring
- Update AGENTS.md, samples, and tests
Resolve conflict in azure_responses_agent/agent.py by taking the upstream change (AzureOpenAIResponsesClient -> FoundryChatClient rename)
Follow the Azure AI Search provider pattern: create the client eagerly in __init__ and make __aenter__ a no-op. This ensures __aexit__/close() is always safe to call and eliminates the _ensure_initialized() workaround.
Replace direct OpenAI client usage with a FileSearchBackend ABC:
- OpenAIFileSearchBackend: for OpenAIChatClient (Responses API)
- FoundryFileSearchBackend: for FoundryChatClient (Azure Foundry)
- Shared base _OpenAICompatBackend for common vector store CRUD

FileSearchConfig now takes a backend instead of openai_client, with factory methods from_openai() and from_foundry() for convenience.

BREAKING: FileSearchConfig(openai_client=...) -> FileSearchConfig.from_openai(...)
- Poll vector store indexing (create_and_poll) to ensure file_search returns results immediately after upload
- Set status to failed when a vector store upload fails
- Skip the get_analyzed_document tool in file_search mode to prevent the LLM from bypassing RAG
- Simplify sample auth: single credential, direct parameters
- Use the from_foundry backend for Foundry project endpoints
- Add module-level docstrings to __init__.py and _context_provider.py
- Use Self return type for __aenter__ (with typing_extensions fallback)
- Use explicit typed params for the __aexit__ signature
- Add sync TokenCredential to the AzureCredentialTypes union
- Pass AGENT_FRAMEWORK_USER_AGENT to ContentUnderstandingClient
- Remove unused ContentLimits from the public API and tests
- Fix FileSearchConfig tests to match the refactored backend API
- Fix lifecycle tests to match eager client initialization
- Refactor _analyze_file to return a DocumentEntry instead of mutating a dict
- Remove TokenCredential from AzureCredentialTypes (fixes mypy/pyright CI)
- Remove OpenAIFileSearchBackend/FoundryFileSearchBackend from the public API (internal to FileSearchConfig factory methods)
- Remove DocumentStatus from public exports (implementation detail)
- Update file_search comments to reflect the backend-agnostic design
- Add DocumentStatus enum, analysis/upload duration tracking
- Add a combined timeout for CU analysis + vector store upload
…led tasks

- _file_search.py: Remove unused logger and logging import
- 01-multimodal_agent/README.md: Remove accidentally pasted Python script
- _context_provider.py close(): Await cancelled tasks before closing the client to prevent 'Task destroyed but pending' warnings
- Add _sanitize_doc_key() to strip control characters, collapse whitespace, and cap length at 255 chars — prevents prompt injection via crafted filenames in extend_instructions() calls.
- Track accepted doc_keys in step 3 so step 5 only injects content for files actually analyzed this turn, not pre-existing duplicates.
- Soften duplicate upload instruction wording (remove IMPORTANT/caps).
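The sanitization steps (strip control characters, collapse whitespace, cap at 255) could be sketched like this; the exact regexes are assumptions, not the package's implementation:

```python
import re

def sanitize_doc_key(filename: str, max_len: int = 255) -> str:
    """Sanitize a filename before it is echoed into agent instructions."""
    # Strip ASCII control characters (including newlines) that a crafted
    # filename could use to smuggle instructions into the prompt.
    cleaned = re.sub(r"[\x00-\x1f\x7f]", "", filename)
    # Collapse runs of whitespace to a single space and trim the ends.
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    # Cap the length so a pathological name cannot flood the context.
    return cleaned[:max_len]
```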
Previously _pending_tasks, _pending_uploads, and _uploaded_file_ids were stored on self, shared across all sessions. This caused cross-session leakage: Session A's background task results could be injected into Session B's context.

Now these are stored in the per-session state dict. Global copies (_all_pending_tasks, _all_uploaded_file_ids) are kept on self only for best-effort cleanup in close().

Add 2 new TestSessionIsolation tests verifying that background tasks and resolved content stay within their originating session.
Only MARKDOWN and FIELDS are handled by _extract_sections(). Remove FIELD_GROUNDING, TABLES, PARAGRAPHS, SECTIONS to avoid exposing dead options to users.
yungshinlintw commented Mar 27, 2026
- Use SDK .value property with recursive extraction for object/array fields
- Object: AmountDue -> {Amount: 610, CurrencyCode: USD} (was raw SDK dict)
- Array: LineItems -> list of flattened items (was raw SDK list)
- Update invoice fixture with object/array fields from prebuilt-invoice
- Add 3 unit tests for object, array, and nested object field extraction
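The recursive extraction could be sketched as below. This assumes REST-style `value*` keys on each field (the PR itself uses the SDK's `.value` property); the flattening behavior matches the AmountDue and LineItems examples above:

```python
def flatten_field(field: dict):
    """Recursively flatten a CU field into plain Python values."""
    kind = field.get("type")
    if kind == "object":
        # Object: flatten each named sub-field, e.g.
        # AmountDue -> {"Amount": 610, "CurrencyCode": "USD"}
        return {k: flatten_field(v) for k, v in field.get("valueObject", {}).items()}
    if kind == "array":
        # Array: flatten each item, e.g. LineItems -> list of flat dicts
        return [flatten_field(item) for item in field.get("valueArray", [])]
    # Scalar: return whichever value* key is present.
    for key in ("valueNumber", "valueString", "valueDate"):
        if key in field:
            return field[key]
    return None
```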
- Use the SDK AnalysisInput model instead of a raw body dict for begin_analyze
- Forward content_range from additional_properties to CU (page/time ranges)
- Extract CU warnings with code/message/target (ODataV4Format) into output
- Include content-level category from classifier analyzers
- Add 5 new tests: warnings, category, content_range forwarding
- Fix pyright with explicit casts; fix en-dash lint (RUF002)
5677c0b to 9f31124
- Fix start_time_ms=0 treated as falsy by 'or' short-circuit; use 'is None' checks instead for duration and segment time extraction
- Update warnings test to use RAI ContentFiltered codes
- Enrich warnings extraction to include code/message/target (ODataV4Format)
- Add multi-segment video category test with per-segment assertions
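The falsy-zero bug in miniature (helper names are illustrative):

```python
def start_ms_buggy(seg: dict, fallback: int = -1) -> int:
    # `or` short-circuits on 0, silently replacing a valid start time.
    return seg.get("startTimeMs") or fallback

def start_ms_fixed(seg: dict, fallback: int = -1) -> int:
    # 'is None' distinguishes "missing" from "zero".
    value = seg.get("startTimeMs")
    return value if value is not None else fallback
```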
- Extract _constants.py: SUPPORTED_MEDIA_TYPES, MIME_ALIASES, analyzer maps
- Extract _detection.py: file detection, MIME sniffing, doc key derivation
- Extract _extraction.py: result extraction, field flattening, LLM formatting
- _context_provider.py delegates via thin wrappers (793 lines, was 1255)
- Update test imports to use _constants.py for SUPPORTED_MEDIA_TYPES
Reviewer's Guide
Closes #4942
This package adds a BaseContextProvider implementation that bridges Azure Content Understanding (CU) with the Agent Framework. When a user sends file attachments (PDF, images, audio, video), the provider intercepts them in before_run(), sends them to CU for analysis, and injects the structured results (markdown + extracted fields) back into the LLM context — so the agent can answer questions about the files without the developer writing any extraction code.

Quick usage:
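The original code snippet did not survive this scrape; a hedged pseudocode sketch of the pattern the guide describes (attach the provider via context_providers, then send files with Content.from_uri()) would be roughly:

```python
# Pseudocode sketch -- constructor parameters and client setup are assumptions
# based on this PR's description, not the package's verified API.
#
# from agent_framework_azure_ai_contentunderstanding import (
#     ContentUnderstandingContextProvider,
# )
#
# cu = ContentUnderstandingContextProvider(endpoint=..., credential=...)
# agent = chat_client.create_agent(
#     instructions="...",
#     context_providers=[cu],   # CU analyzes attachments before each run
# )
# # Attach a file with Content.from_uri(...) and ask questions about it.
```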
Suggested review order
1. Start with samples — they show the feature set and usage patterns end-to-end:
- 01_document_qa.py -- Content.from_uri(), context_providers=[cu], and how CU results appear in the agent's response.
- 02_multi_turn_session.py -- AgentSession persistence: upload a file on turn 1, ask follow-up questions on turns 2–3 without re-uploading. Shows how state["documents"] carries across turns.
- 03_multimodal_chat.py
- 04_invoice_processing.py -- additional_properties={"analyzer_id": "prebuilt-invoice"} to extract structured invoice fields (vendor, total, line items) instead of generic markdown.
- 05_background_analysis.py -- max_wait=0.5: the file starts analyzing in the background while the agent responds immediately; the next turn resolves the pending result. Shows the analyzing → ready status flow.
- 06_large_doc_file_search.py -- file_search tool instead of injecting full content into context.

2. Then review the core implementation:
- _context_provider.py (1087 lines) -- before_run() hook, file detection/stripping, CU analysis with timeout + background fallback, output formatting, tool registration. Most important file to review.
- _models.py -- DocumentEntry, DocumentStatus, AnalysisSection, FileSearchConfig TypedDicts and enums exposed to users
- _file_search.py -- FileSearchBackend protocol + OpenAI/Foundry factory methods for vector store integration
- __init__.py
- pyproject.toml
- tests/

MAF API usage (needs team alignment)
This package uses the following internal/private MAF APIs — if any of these are changing or not intended for external use, this package may need updates:
- BaseContextProvider and its before_run() hook
- SessionContext.extend_instructions(), extend_messages(), extend_tools()
- Content.from_data(), Content.from_uri(), Content.type, Content.media_type, Content.additional_properties
- FunctionTool for registering list_documents()
- agent_framework._sessions.AgentSession
- agent_framework._settings.load_settings()

This PR adds
agent-framework-azure-ai-contentunderstanding, an optional connector package that integrates Azure Content Understanding (CU) into the Agent Framework as a context provider.

What's Included
Core (_context_provider.py, _models.py, _file_search.py)

- ContentUnderstandingContextProvider -- auto-analyzes file attachments (PDF, images, audio, video) via Azure CU and injects structured results (markdown, fields) into LLM context
- Prebuilt analyzers (prebuilt-documentSearch, prebuilt-audioSearch, prebuilt-videoSearch)
- Status tracking (analyzing/uploading/ready/failed)
- Configurable timeout (max_wait) with async background fallback
- Output filtering via the AnalysisSection enum
- Auto-registered list_documents() tool for status queries
- MIME sniffing for unreliable media types (application/octet-stream)
- Per-file analyzer override via Content.additional_properties["analyzer_id"] -- mix different analyzers in the same turn (e.g., prebuilt-invoice for invoices alongside prebuilt-documentSearch for general docs)
- FileSearchConfig for vector store integration (OpenAI/Foundry backends)

Samples (6 scripts + 3 DevUI)
- 01_document_qa.py -- Single PDF upload + Q&A
- 02_multi_turn_session.py -- AgentSession persistence across turns
- 03_multimodal_chat.py -- PDF + audio + video parallel analysis (5 turns)
- 04_invoice_processing.py -- Structured field extraction with prebuilt-invoice
- 05_background_analysis.py -- Non-blocking analysis with max_wait + status tracking
- 06_large_doc_file_search.py -- CU extraction + vector store RAG
- 02-devui/01-multimodal_agent -- Interactive web UI for uploading and chatting with documents/audio/video
- 02-devui/02-file_search_agent/azure_openai_backend -- DevUI with CU + Azure OpenAI file_search RAG
- 02-devui/02-file_search_agent/foundry_backend -- DevUI with CU + Foundry file_search RAG

Tests