Being predictive about maintenance pays off in several ways. It allows for a fast and targeted response to potential incidents, reducing service downtime and operational costs. By decreasing unscheduled maintenance, predictive maintenance also improves productivity and the quality of service delivered to customers. Our recent use case for Fednot (Royal Federation of Belgian Notaries) is one such example: we created a smart monitoring dashboard with Azure and Power BI to help them integrate artificial intelligence into their business.
The purpose of this project was not only to build a proof of concept with their application logging and monitoring data, but also to introduce Fednot’s infrastructure team to the different flavours of Azure AI services, suitable for all levels of knowledge and experience in AI. In this blog post, we briefly walk through the development mindset and reasoning.
Converting unstructured data to structured data
Log files are text-heavy and unstructured, meaning the data is raw and not very useful yet. Our first mission was to identify the hidden information relevant to the final goal: predictive maintenance. To start, we looked for patterns that would let us convert the unstructured data into structured data, typically presented as tables or key-value pairs. There are virtually countless ways to make this conversion, but most of them are not suitable for solving our problem. Ideally, the extracted attributes would be identical to the factors driving our prediction target. In practice this is rarely the case, but we want to get as close as possible. The bottom line is that it is simply not possible to build good models without extracting the right information from the raw data.
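As a concrete illustration, a regular expression can turn a raw log line into key-value pairs. The log format below is hypothetical; Fednot’s actual application logs are not shown here, so the pattern would need to be adapted to the real format.

```python
import re

# Hypothetical log format: "<timestamp> <LEVEL> <component> - <message>".
# The real application logs may look different.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<component>\S+) - (?P<message>.*)"
)

def parse_log_line(line):
    """Convert one raw log line into a structured record, or None if it doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

record = parse_log_line("2023-05-01 12:00:03 ERROR auth-service - connection timed out")
```

Each matched line becomes a dictionary with `timestamp`, `level`, `component`, and `message` keys, which is already a usable structured record for downstream feature extraction.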
The extracted attributes are called features, and they represent a specific interpretation of the data. The objective at this stage is to define features that are as close to the true factors as possible. Machine learning models are good at solving complex, multidimensional problems; however, their training can become inefficient when given too many features, increasing the computing cost and decreasing performance. On the other hand, oversimplified features are likely to miss the indirect factors that are not obvious to humans, which also leads to poor performance. Another challenge in extracting features from text-rich data is choosing the semantic unit. A sentence is a sequence of phrases, which are combinations of words, which are sequences of letters. Making the features syntactically general can introduce ambiguity, leading to poor predictive capability. Conversely, making them too specific can produce a large number of trivial features. Striking a good balance is crucial for a successful model; it requires a deep understanding of the data and the business problem, and it needs to be revisited throughout the development lifecycle of a machine learning model.
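One simple middle ground, sketched here with hypothetical records, is to aggregate parsed log events into counts per time window and severity level: coarse enough to keep the feature set compact, yet specific enough to feed a model.

```python
from collections import Counter
from datetime import datetime

def to_window(timestamp, minutes=5):
    """Truncate a timestamp string to the start of its N-minute window."""
    dt = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    return dt.replace(minute=dt.minute - dt.minute % minutes, second=0)

def featurize(records, minutes=5):
    """Count log events per (window, level) pair; one feature row per window."""
    return Counter((to_window(r["timestamp"], minutes), r["level"]) for r in records)

# Hypothetical parsed records for illustration.
records = [
    {"timestamp": "2023-05-01 12:01:10", "level": "ERROR"},
    {"timestamp": "2023-05-01 12:03:45", "level": "ERROR"},
    {"timestamp": "2023-05-01 12:07:02", "level": "INFO"},
]
features = featurize(records)
```

Whether five-minute windows and severity levels are the right granularity is exactly the kind of balance question described above, and the answer depends on the data and the business problem.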
To support this process, we created a transitional dashboard for feature exploration, allowing users to inspect mock-up results by selecting different combinations of features from a pre-defined feature pool. A retrieval table is also available to trace back to the original sentences in which the selected features appear. This enables Fednot to bring their valuable business understanding into the development process without writing any code.
Azure AI Services
Once feature extraction is done, the next step is to bring out the insights from the processed data and ingest them into the final dashboard for monitoring and business decision making. Azure provides a variety of cloud services for AI implementation and integration, catering to all skill levels of developers. These services are building blocks that can easily be integrated into a complete AI-infused solution, so you don’t need to build everything from scratch. In this project, we built the data pipeline on Azure Databricks (an analytics platform, requiring data expertise and code-heavy), incorporated anomaly detection by calling the Anomaly Detector API from Cognitive Services with only a few lines of code (prebuilt models, manageable for the general programmer and low-code), and created a Power BI dashboard for intuitive, dynamic visualization of the results (summarized insights, for decision-makers and no-code). The Azure ecosystem brings AI within reach of every developer, without requiring machine learning expertise. In essence, using AI for businesses is no longer the privilege of big tech companies.
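To give an idea of how little code such a call involves, the sketch below assembles a request body of the general shape the Anomaly Detector REST API documents (a granularity plus a list of timestamp/value points). The timestamps and values here are made up, and the actual POST to the service endpoint is omitted since it requires a subscription key.

```python
import json

def build_detect_request(points, granularity="hourly"):
    """Assemble a request body of the general form the Anomaly Detector
    REST API expects: a granularity plus a series of timestamp/value points."""
    return {
        "granularity": granularity,
        "series": [{"timestamp": ts, "value": v} for ts, v in points],
    }

# Made-up request counts per hour, purely for illustration.
points = [
    ("2023-05-01T12:00:00Z", 120),
    ("2023-05-01T13:00:00Z", 135),
    ("2023-05-01T14:00:00Z", 128),
]
body = json.dumps(build_detect_request(points))
# The body would then be POSTed to the service's detect endpoint with the
# subscription key in the headers; that call is omitted here.
```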
Below, we elaborate on the two services we used for this project.
The Anomaly Detector, in a nutshell, takes any time-series dataset as input and automatically chooses the appropriate model to fit the data. It returns the expected values, the boundaries, and the anomalies of the time series. This information is useful for both monitoring and prediction. For example, the infrastructure team gets notified when the number of requests exceeds a dynamic threshold, so they can scale up the servers before they become overloaded. Furthermore, they can use the data to build a predictive model that triggers auto-scaling pre-emptively. Adding anomaly detection on top of the current monitoring system takes only a handful of lines of code, yet improves system reliability remarkably.
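To make the concept tangible without calling the hosted service, here is a minimal local sketch of the same idea: expected values from a rolling mean, boundaries from the rolling standard deviation, and points outside the boundaries flagged as anomalies. The actual Anomaly Detector selects among far more sophisticated models, so this is only an illustration.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, k=3.0):
    """Flag points that fall outside mean +/- k * stdev of the preceding window."""
    flags = []
    for i, value in enumerate(series):
        if i < window:
            flags.append(False)  # not enough history to judge yet
            continue
        history = series[i - window:i]
        expected = mean(history)          # rolling expected value
        margin = k * stdev(history)       # rolling boundary width
        flags.append(abs(value - expected) > margin)
    return flags

# Made-up requests-per-minute series with one obvious spike.
requests_per_min = [100, 102, 99, 101, 100, 103, 400, 101]
flags = detect_anomalies(requests_per_min)
```

Here the spike to 400 is flagged while the normal fluctuations are not, which is exactly the kind of signal that can drive a scaling alert.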
Power BI is a cloud-based tool that requires no capital expenditure or infrastructure support, regardless of the size of a business. It integrates seamlessly with existing applications, especially in the Microsoft environment. We wrapped up the proof of concept with a user-friendly, interactive dashboard in Power BI, which is ideal for presenting and sharing insights. What’s nice about it is that people can play around with the parameters and slicers to explore the analytics results intuitively, which ultimately contributes to smart business decisions.
“Predictive models learn patterns from historical data and predict future outcomes with a certain probability based on these observed patterns. A model’s predictive accuracy depends on the relevancy, sufficiency, and quality of the training and test data.”
In conclusion, the introduction of relatively simple AI services contributed greatly to the overall goal of establishing a predictive maintenance culture at Fednot. Even with limited effort, Fednot gained a lot of additional value from their monitoring data. In its current phase, the project is aimed more at anomaly detection than at predictive maintenance, but it is a first step towards a data-driven infrastructure environment. It already allows Fednot to reduce unexpected application downtime, and thus operational costs; it enables incident pattern discovery; and it ultimately improves the quality of various applications and the overall customer experience.
Interested to know more? Get in touch!
– Fisher Kuan & Wouter Baetens