I have a Machine Learning (ML) application running on my laptop, and I was looking for ways to improve its accessibility and performance. My primary investigation focused on migrating the application to the cloud. This blog describes my cloud migration journey and highlights some of the main lessons I learned along the way. I hoped that by migrating my ML application to the cloud, I'd gain performance and scaling options that couldn't be matched by my local machine implementation.
The core of my project was a machine learning algorithm written in Python. The data on which the model was trained was held in a local SQL database. The application layer was a NodeJS server that executed the Python ML scripts and displayed the Machine Learning results in HTML.
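To make the starting point concrete, the NodeJS server invoked the Python ML code roughly as in the sketch below. This is illustrative rather than my exact code; the script name and argument handling are hypothetical.

```typescript
// Illustrative sketch of the original local setup: the NodeJS server spawns the
// Python ML script as a child process and returns its output for the HTML page.
// "predict.py" is a placeholder name for the Python file holding the ML algorithm.
import { spawn } from "child_process";

function runPrediction(inputJson: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const py = spawn("python", ["predict.py", inputJson]);

    let output = "";
    py.stdout.on("data", (chunk) => (output += chunk.toString()));
    py.stderr.on("data", (chunk) => console.error(chunk.toString()));
    py.on("close", (code) =>
      code === 0 ? resolve(output) : reject(new Error(`python exited with code ${code}`))
    );
  });
}
```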
For my choice of cloud platform I considered AWS, Google Cloud and Azure, and ultimately settled on Azure because of the Azure credit that's offered through Microsoft's Visual Studio Enterprise Subscription. It meant that I could port my Machine Learning application onto the cloud at no incremental cost!
There were two main tasks involved in porting my Machine Learning application to the Azure cloud:
(1) Migrate my ML training database to an Azure cloud database (a brief connection sketch follows this list)
(2) Port my NodeJS functionality into a NodeJS environment within the Azure App Service
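On the application side, the main code-level change for task (1) was pointing the NodeJS server at the new cloud connection string. Below is an illustrative sketch only, assuming the third-party "mssql" npm package; the server, database, table, and credential values are placeholders.

```typescript
// Illustrative sketch of connecting the NodeJS app to the migrated Azure SQL database.
// Assumes the third-party "mssql" npm package; all names below are placeholders.
import sql from "mssql";

const config = {
  server: "my-server.database.windows.net", // hypothetical Azure SQL server name
  database: "ml-training-db",               // hypothetical database name
  user: process.env.DB_USER!,
  password: process.env.DB_PASSWORD!,
  options: { encrypt: true },               // Azure SQL requires encrypted connections
};

export async function fetchTrainingRows() {
  const pool = await sql.connect(config);
  const result = await pool
    .request()
    .query("SELECT TOP (100) * FROM TrainingData"); // placeholder table name
  return result.recordset;
}
```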
Microsoft’s developer guides made both tasks, (1) porting the database and (2) porting the NodeJS code, straightforward. However, I ran into my first significant porting challenge when I realised that the Azure cloud service doesn't allow the equivalent of the NodeJS Child Process call I had been using to execute my Python code. I had to find an alternative means of running my Machine Learning Python algorithm code!
Fortunately, Azure came to the rescue with its Azure "Machine Learning Service". Using Azure's Machine Learning Service, I was able to create a Machine Learning Workspace (ML Workspace). The ML Workspace environment lets you import datasets, train ML models on them, and deploy those models. Azure ML Workspace models are created using Python, so I was able to repurpose my existing Python scripts, effectively eliminating the need for the NodeJS Child Process invocation. Azure exposes access to the deployed model through a RESTful API, which enabled me to query my model for new predictions.
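To give a flavour of that final integration, here is an illustrative sketch of the NodeJS side querying a deployed model's scoring endpoint over REST. The endpoint URL, key, and payload shape are placeholders that depend on how the model is deployed.

```typescript
// Illustrative sketch of querying a deployed Azure ML model from the NodeJS server.
// Uses Node 18+ global fetch; the URL, key, and payload schema are placeholders.
const SCORING_URI = process.env.AZURE_ML_SCORING_URI!; // endpoint shown in the ML Workspace
const API_KEY = process.env.AZURE_ML_API_KEY!;

export async function getPrediction(features: number[]): Promise<unknown> {
  const response = await fetch(SCORING_URI, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ data: [features] }), // payload shape depends on the scoring script
  });

  if (!response.ok) {
    throw new Error(`Scoring request failed: ${response.status}`);
  }
  return response.json();
}
```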
The full support for Machine Learning applications on Azure meant I was then able to re-create my Machine Learning Model, deploy it, and utilise it for predicting new outcomes. I can scale my Azure services, if needed, as my datasets grow. I now have a much more flexible Machine Learning application, all with the help of Azure!
For all your Software Development needs please contact Aspira here.
Author: Alan Lehane, Aspira Software Developer