Azure World Newsletter – Issue 4.05

March 8, 2023

Welcome to the fifth edition of the Azure World Newsletter in 2023.

Hello again, my friends from around the world. I’m so happy you continue to subscribe to and read this bi-weekly newsletter on Azure. I enjoy sitting down to research and write each issue, and I hope you will continue to find value in it. Feel free to invite your co-workers or anyone else to subscribe if you think they would find it helpful.

The unsubscribe link is at the bottom if you want to stop receiving these emails.


As I sat down to write this week’s newsletter, I did not know today (March 7) was Azure Open Source Day.

The open-source community remains strong worldwide, and Microsoft participates in it. Whereas twenty years ago its motto was “embrace and extend”, in the modern Satya Nadella era Microsoft is more likely to contribute back to the open-source community and to support those products running in its environments.

I haven’t seen anyone spell the word Microsoft with a dollar sign for the S in more than five years, which is a good sign.

One example of this is Hugging Face, an organization that develops and promotes open source in the world of AI. Instead of being tied to proprietary AI tools from vendors like Microsoft or Google, users can host their models and datasets on Hugging Face for others to use freely.

Microsoft is integrating open-source Hugging Face models into its own Azure Machine Learning. So users can leverage community models and datasets for free, with Microsoft’s support, in their Azure Machine Learning environment.

In fact, you can use these pre-trained models for natural language processing, vision, and traditional ML tasks instead of the Microsoft versions.
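To make the idea concrete, here is a minimal sketch of calling one of those freely hosted Hugging Face models over plain HTTP via Hugging Face’s own hosted Inference API. This is an illustration of using a community model, not the Azure ML integration itself; the model id is a real, freely hosted sentiment model, but the token is a placeholder, so the sketch builds the request without sending it.

```python
import json
import urllib.request

# Hugging Face's hosted Inference API serves many open models over HTTP.
HF_API_BASE = "https://api-inference.huggingface.co/models"

def build_inference_request(model_id: str, text: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that runs `text` through `model_id`."""
    return urllib.request.Request(
        url=f"{HF_API_BASE}/{model_id}",
        data=json.dumps({"inputs": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_inference_request(
    "distilbert-base-uncased-finetuned-sst-2-english",  # real, freely hosted sentiment model
    "This newsletter is great!",
    "hf_your_token_here",  # placeholder -- supply your own token to actually send it
)
print(req.full_url)
```

Sending the request (with a real token) returns JSON label/score pairs; the point is simply that a community-hosted model is one POST away, with no proprietary SDK required.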

Another exciting announcement, one I could not find much information about, was a new way of tracking lost pets.

From Microsoft’s blog post: “Today, we are excited to be showcasing a brand-new, intelligent, cloud-native application that connects owners with their lost pets using fine-tuned machine learning. Instead of printing posters, use an advanced machine learning image classification model, fine-tuned by the images on your camera roll. With this trained machine learning model, when a pet is found, you can instantly snap a photo that will match the model and connect you to the owner.”

As a pet owner, I’d love to know more about this app that makes it easy to identify who owns the lost pet you found. Sadly, Microsoft did not provide the name of the app nor a link within their blog post.

More info:

And also:


Microsoft has announced a new ML model for Cognitive Services that is both multimodal and unified. The model is nicknamed Florence, and it has been trained on billions of images and videos. Florence apparently represents a whole new approach to ML; Microsoft calls it a “complete rethinking”.

Multimodal generally means an AI can work across different content types like video, audio, images, and text. And calling it unified implies that, instead of working with different machine learning models for every type of content you want to work with, this one ML model can handle all of it.

I can see why Microsoft wants a model like this. Right now, if you want to transcribe sound to text, there is one API. Then, if you want to translate that text into a different language, there is another API. And if you want to use it to understand the user’s intent and reply helpfully (like a chatbot), that’s a third API. A unified ML model should be able to do all of these things.
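The chained-API workflow described above can be sketched as three separate calls. The function bodies here are hypothetical stand-ins, not real Azure SDK calls; in a real app each would be its own service round trip with its own endpoint, key, and billing meter, which is exactly the overhead a unified model would remove.

```python
# Hypothetical stand-ins for three single-purpose APIs (stubbed for illustration).

def transcribe(audio: bytes) -> str:
    """Speech-to-text API (stubbed)."""
    return "where is the nearest train station"

def translate(text: str, target_lang: str) -> str:
    """Translation API (stubbed)."""
    return f"[{target_lang}] {text}"

def understand_intent(text: str) -> dict:
    """Language-understanding API (stubbed)."""
    return {"intent": "FindPlace", "entity": "train station"}

# Today: three round trips, three separately trained models.
text = transcribe(b"<audio bytes>")
translated = translate(text, "de")
intent = understand_intent(text)
print(intent["intent"])

# With a unified multimodal model, one call would cover all three steps.
```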

Of course, that sounds incredibly difficult, which is why it’s rare.

Single-purpose ML models are trained to excel at one task, and so far, that’s what we’ve seen in Azure Cognitive Services and elsewhere.

It’s been a couple of years since Microsoft first mentioned Florence. And now, in 2023, the year of AI, they are starting to roll it out into several of their products.

For instance, Florence has finally been released as part of the Vision APIs in Azure Cognitive Services.

The new model is more powerful than a traditional OCR that recognizes the individual letters in an image. Florence can also recognize objects in a video. You could, theoretically, use it to search a video for a particular frame or for the appearance of an object, such as a bike or a car.

The main uses for this vision model appear to be accessibility (helping those who have trouble reading text embedded in an image), SEO, and content moderation.
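Here is a rough sketch of what calling the Florence-backed vision capability looks like over REST, based on the preview Image Analysis API. The endpoint shape and api-version are from the preview documentation and may change; the resource name and key are made up, so the sketch constructs the request rather than sending it.

```python
import json
import urllib.parse
import urllib.request

def build_analyze_request(endpoint: str, key: str, image_url: str,
                          features: str = "caption,read") -> urllib.request.Request:
    """Build (but do not send) an Image Analysis request for an image URL."""
    query = urllib.parse.urlencode({
        "api-version": "2023-02-01-preview",  # preview version; may change
        "features": features,  # e.g. a generated caption plus OCR-style text reading
    })
    return urllib.request.Request(
        url=f"{endpoint}/computervision/imageanalysis:analyze?{query}",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_analyze_request(
    "https://my-vision-resource.cognitiveservices.azure.com",  # hypothetical resource
    "<your-key>",
    "https://example.com/bike.jpg",
)
print(req.full_url)
```

With a real resource and key, the response includes a natural-language caption and any text found in the image, which lines up with the accessibility and captioning use cases mentioned above.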

Reddit, for instance, will use such a model to create captions for the hundreds of millions of images on its platform. And since at least 40% of LinkedIn posts contain an image, this will help users of that platform as well.

With all this Generative AI and ChatGPT talk, it’s good to see that Microsoft is continuing to improve the essential ML services.

See more:

And TechCrunch has a pretty lengthy write-up on the technology:


The following updates to the Azure platform were announced in the last two weeks:

  • Azure Data Explorer Dashboards in GA
  • Azure Monitor Query client module for Go
  • Azure Percept DK will be retired on March 30, 2023
  • Azure Managed Lustre for accelerated HPC and AI workloads, in preview
  • Create disks from CMK-encrypted snapshots across subscriptions and in the same tenant
  • Customer Initiated Storage Account Conversion allows you to go from non-zonal storage to zonal redundancy via Azure Portal
  • Caching in ACR, in preview
  • Pod sandboxing in AKS, in preview
  • Online live resize of persistent volumes in AKS
  • Confidential containers on ACI, in preview
  • Azure Network Watcher new enhanced connection troubleshoot
  • Azure Monitor Ingestion client libraries for .NET, Java, JavaScript, and Python
  • Azure Virtual Network Manager Event Logging is now in public preview
  • Model Serving on Azure Databricks

Be sure to check out the Azure Updates page if any of these affect you.


My ChatGPT course launched on Udemy. I have some work to do to add more content and deliver more value to students. So I’m “heads down” this week working on that.

I’m also continuing to revise the AZ-900 course to give it a new visual look throughout. Luckily, there have not been many revisions in the Azure exams for a few months. I probably just jinxed myself by saying that.


And that’s it for issue 4.05. Thanks for reading this far. Talk to you again in two weeks.

What is your favorite platform to be on? Perhaps we can connect there.

Facebook Page: 





LinkedIn Learning: