Microsoft Unveils Azure OpenAI Data Zones with 99% SLA for Token Generation
Microsoft Enhances Azure AI Services with New Capabilities
Microsoft’s Azure AI service portfolio continues to thrive, currently serving over 60,000 customers globally. Recently, the tech giant unveiled a range of new features aimed at facilitating the development and scaling of AI solutions, with a special focus on the rapidly expanding Azure OpenAI service.
Introducing Azure OpenAI Data Zones for Enhanced Data Control
One pivotal enhancement is the launch of the Azure OpenAI Data Zones deployment option. This feature gives enterprises greater control over data privacy and residency. Organizations operating within the United States and the European Union can now process and store their data within designated geographic boundaries, which is particularly significant for meeting regional data residency mandates. Currently, the Data Zones deployment option is available to customers on the Standard (PayGo) tier, with availability on the Provisioned tier planned to follow shortly.
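As a rough sketch of how such a deployment might be created with the Azure CLI: the resource group, account, and deployment names below are placeholders, and the `DataZoneStandard` SKU name follows the announced naming but should be verified against current Azure documentation for your region.

```shell
# Hedged sketch: create an Azure OpenAI model deployment pinned to a Data Zone.
# All names here are placeholders; confirm the SKU name and model version
# against the Azure OpenAI documentation before running.
az cognitiveservices account deployment create \
  --resource-group my-resource-group \
  --name my-openai-account \
  --deployment-name gpt-4o-datazone \
  --model-name gpt-4o \
  --model-version "2024-08-06" \
  --model-format OpenAI \
  --sku-name "DataZoneStandard" \
  --sku-capacity 10
```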
Improved Performance with New SLA Offerings
To strengthen the reliability of its services, Microsoft has introduced a 99% latency Service Level Agreement (SLA) for token generation in the Azure OpenAI service. This guarantee means developers can count on consistent token generation latency, giving them greater confidence in the service's overall performance.
Advanced AI Models Now Accessible
Additionally, Microsoft has expanded its AI model offerings for developers. The Azure AI service now features three advanced multimodal medical imaging models:
- MedImageInsight: performs comprehensive image analysis
- MedImageParse: handles image segmentation across imaging modalities
- CXRReportGen: generates detailed, structured chest X-ray reports
Moreover, developers can now access Cohere Embed 3, a leading embedding model for AI search, alongside Mistral AI's small language model, Ministral 3B. Fine-tuning is now available for both the Phi-3.5-mini and Phi-3.5-MoE models, further enhancing the adaptability of the Azure AI service.
Seamless Integration via GitHub Marketplace
To facilitate exploration and experimentation, Azure AI models are accessible through the GitHub Marketplace via the Azure AI model inference API. This integration provides developers with a playground environment to evaluate and compare model performance at no cost. Once ready to deploy a model, developers can simply log into their Azure accounts and upgrade to a paid tier.
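To illustrate what calling the Azure AI model inference API might look like, here is a minimal stdlib-only Python sketch. The endpoint URL reflects the GitHub Models preview and the model name is one of those mentioned above; both are assumptions that may change, and an actual call requires a GitHub personal access token in the `GITHUB_TOKEN` environment variable.

```python
import json
import os
import urllib.request

# Assumption: this is the GitHub Models inference endpoint at time of writing;
# verify it against the current GitHub Marketplace / Azure AI documentation.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_chat_request(model, prompt, max_tokens=256):
    """Build the JSON body for a chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def send_chat_request(token, body):
    """POST the request body; requires a GitHub personal access token."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


body = build_chat_request("Ministral-3B", "Summarize Azure Data Zones in one line.")
token = os.environ.get("GITHUB_TOKEN")
if token:  # only call the service when a token is actually configured
    result = send_chat_request(token, body)
    print(result["choices"][0]["message"]["content"])
```

Once a developer upgrades to a paid tier, the same request shape can be pointed at their Azure endpoint with an Azure credential instead of the GitHub token.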
What’s Next for Azure AI?
Microsoft plans to unveil additional Azure AI features at the upcoming Microsoft Ignite 2024 event later this month, which is expected to showcase further advances in its AI offerings.
Stay tuned for more updates as Microsoft continues to enhance its Azure AI portfolio.