
02/04/2026

Authenticating Azure Foundry OpenAI Using Managed Identity

 In an earlier post we tested 30+ features of various Azure AI Foundry OpenAI services.

Post: https://pratapreddypilaka.blogspot.com/2026/04/azure-ai-services-complete-guide-to.html

Most of that code saved the OpenAI service keys in an environment (.env) file, a common Python convention. But this was done only for practice purposes, just to get familiar with the OpenAI services and how to access them from Python code.

Saving access keys in code is one of the worst security blunders you can make. Access keys completely bypass all RBAC controls and grant full access to anyone who obtains them.


What are the ways to authenticate Azure OpenAI services without exposing keys?

  • System Assigned Managed Identity — Auto-created identity tied to an Azure resource (VM, App Service, Functions)
  • User Assigned Managed Identity — Standalone identity you create and attach to one or more Azure resources
  • Service Principal + Client Secret — App registration with a secret, works anywhere including on-prem
  • Service Principal + Certificate — Same as above but uses a certificate instead of secret, more secure
  • Azure CLI Credential — Uses the logged-in az login identity, ideal for local dev/testing
  • Interactive Browser Credential — Pops a browser login window, good for desktop tools
  • Device Code Credential — Prints a code to enter at a URL, useful for headless servers
  • DefaultAzureCredential — Tries multiple methods automatically in order, recommended for most cases
  • Workload Identity — Federated keyless auth for pods running in Azure Kubernetes Service (AKS)
  • Federated Identity Credential — Allows external IdPs like GitHub Actions or GitLab to authenticate without any secrets

  • Azure AI Services - A Complete Guide to Building Intelligent Applications with Azure AI Foundry

    A while ago I started exploring Azure AI Foundry and ended up going down a rabbit hole of 30+ implementations covering everything from GPT-5 chat to live speech transcription. In this post I will walk you through all the major Azure AI services, what they do, how to implement them, and when to use them, so you don't have to figure it all out the hard way like I did.

    You can clone the Git repo, add your Azure OpenAI service keys to a .env file, and execute the samples as we go along.

    Our objective is to understand the complete Azure AI Services ecosystem and how you can combine them to build enterprise-grade intelligent applications.


    Azure OpenAI - GPT-5 Chat, Vision and Code

    This is where most people start, and for good reason. Azure OpenAI gives you access to GPT-5 with enterprise-grade security, regional deployment and SLAs — unlike calling OpenAI directly.

    The basic setup is straightforward. You initialize an AzureOpenAI client with your endpoint and API key, define a system role (something like "you are a helpful travel assistant"), pass in user messages and configure temperature and top_p for response behavior. That's it: you are doing conversational AI.

    from openai import AzureOpenAI
    from dotenv import load_dotenv
    import os

    load_dotenv()  # pulls AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_KEY from .env
    client = AzureOpenAI(
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_KEY"),
        api_version="2024-12-01-preview"
    )
    

    What makes it more interesting is Vision. You can encode an image to base64, pass it as image_url in the message content, and GPT-5 will analyze and explain it — diagrams, screenshots, anything. I used this for code explanation too. Point it at a source file with a "you are a teacher" system prompt and let it stream the explanation back. Really useful for documentation generation and code reviews.
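A vision request is just a regular chat message whose content mixes a text part with an image_url part carrying a base64 data URL. A sketch of building that payload (the helper name and the placeholder bytes are illustrative, not from the repo):

```python
import base64

def build_vision_message(image_bytes: bytes, question: str,
                         mime: str = "image/png") -> dict:
    """Wrap raw image bytes into the chat-completions message format
    vision models expect: a text part plus an image_url part holding
    a base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Placeholder bytes; in practice read a real file,
# e.g. open("diagram.png", "rb").read()
msg = build_vision_message(b"\x89PNG...", "Explain this diagram")
```

The resulting dict goes straight into the messages list of client.chat.completions.create, alongside your system prompt.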

    Chat Output:


    Image Reading Output: