ML microservice with Nameko to implement a predictive maintenance application
This article explains the development and deployment of a microservice-based predictive maintenance application. The system consists of:
- a web service that receives data for prediction
- an ML service that returns predictions from the trained model
- an HBase service that stores the data
- a dashboard service that pushes the data to Kafka so it can be visualized in Kibana
The microservices have been built with Nameko and are deployed with docker-compose.
ML Development
This part of the project was done on Kaggle and is based on the NASA turbofan degradation dataset. The work can be found here!
The model is based on a random forest algorithm; the target is to predict the remaining useful life (RUL) of an engine from its current state. The trained model is exported to a joblib file so it can be used by the ml-service.
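A minimal training sketch is shown below. It assumes the Kaggle notebook has already produced a preprocessed CSV with sensor/setting feature columns and a computed "RUL" target; the file name, column names, and hyperparameters here are illustrative, not the exact notebook values.

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical preprocessed training file with a precomputed RUL column.
df = pd.read_csv("train_FD001_with_rul.csv")

X = df.drop(columns=["RUL"])   # current-state features (settings + sensors)
y = df["RUL"]                  # remaining useful life in cycles

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# Export the trained model so the ml-service can load it at startup.
joblib.dump(model, "rul_model.joblib")
```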
ML Deployment
This part of the project is a Nameko service that loads the trained model and returns a prediction for the input data.
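A sketch of what this service could look like, assuming the exported model file is named "rul_model.joblib" and that the RabbitMQ broker used by Nameko is configured via docker-compose:

```python
import joblib
from nameko.rpc import rpc

# Load the trained model once, when the service process starts.
model = joblib.load("rul_model.joblib")

class MLService:
    name = "ml_service"

    @rpc
    def predict(self, features):
        # `features` is expected to be a list of rows, each row holding the
        # sensor/setting values in the same order used during training.
        predictions = model.predict(features)
        return predictions.tolist()
```

The service can then be started with `nameko run`, pointing at the RabbitMQ broker defined in docker-compose.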
RESTful API
This service is built with FastAPI and asks the ml-service to calculate the remaining cycles of the turbofan.
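A sketch of the REST layer, assuming the broker is reachable at "amqp://guest:guest@rabbitmq" and that the ml-service exposes the `predict` RPC method shown above; the endpoint path and request schema are illustrative.

```python
from typing import List

from fastapi import FastAPI
from nameko.standalone.rpc import ClusterRpcProxy
from pydantic import BaseModel

CONFIG = {"AMQP_URI": "amqp://guest:guest@rabbitmq"}

app = FastAPI()

class TurbofanState(BaseModel):
    features: List[float]   # current sensor/setting values of one engine

@app.post("/predict")
def predict(state: TurbofanState):
    # Forward the request to the Nameko ml-service over RPC.
    with ClusterRpcProxy(CONFIG) as rpc:
        remaining_cycles = rpc.ml_service.predict([state.features])
    return {"remaining_cycles": remaining_cycles[0]}
```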
HBase service
This service stores the data in HBase; the Dockerfile for the HBase container has been developed here!
First, this table is created at startup:
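A sketch of the table-creation step, assuming the HappyBase client talking to the Thrift server exposed by the HBase container; the table name and column family are assumptions for illustration.

```python
import happybase

# "hbase" is assumed to be the container name in docker-compose; 9090 is the default Thrift port.
connection = happybase.Connection("hbase", port=9090)

# Create the table only if it does not exist yet.
if b"turbofan" not in connection.tables():
    connection.create_table("turbofan", {"data": dict()})
```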
Then, in the Nameko service, saving and querying the data looks like this:
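A sketch of the save and query logic with HappyBase; the row-key layout and column qualifiers are assumptions.

```python
import happybase

connection = happybase.Connection("hbase", port=9090)
table = connection.table("turbofan")

def save_prediction(engine_id, cycle, remaining_cycles):
    # One row per engine/cycle pair.
    row_key = f"{engine_id}-{cycle}".encode()
    table.put(row_key, {
        b"data:engine_id": str(engine_id).encode(),
        b"data:cycle": str(cycle).encode(),
        b"data:remaining_cycles": str(remaining_cycles).encode(),
    })

def query_engine(engine_id):
    # Scan all rows whose key starts with the engine id.
    prefix = f"{engine_id}-".encode()
    return [
        {k.decode(): v.decode() for k, v in data.items()}
        for _, data in table.scan(row_prefix=prefix)
    ]
```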
And finally, it is represented like this as a Nameko service:
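One way this could be wrapped as a Nameko service; the service name and RPC method names are assumptions for illustration.

```python
import happybase
from nameko.rpc import rpc

class HBaseService:
    name = "hbase_service"

    def _table(self):
        # Open a fresh connection per call to keep the sketch simple.
        connection = happybase.Connection("hbase", port=9090)
        return connection.table("turbofan")

    @rpc
    def save(self, engine_id, cycle, remaining_cycles):
        row_key = f"{engine_id}-{cycle}".encode()
        self._table().put(row_key, {
            b"data:remaining_cycles": str(remaining_cycles).encode(),
        })
        return row_key.decode()

    @rpc
    def query(self, engine_id):
        prefix = f"{engine_id}-".encode()
        return [
            {k.decode(): v.decode() for k, v in data.items()}
            for _, data in self._table().scan(row_prefix=prefix)
        ]
```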
Dashboard service
This service pushes the data into a Kafka topic; Logstash reads from that topic and stores the data in Elasticsearch, where it is then visualized in Kibana.
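A sketch of the producing side, assuming the kafka-python client, a broker reachable at "kafka:9092", and a topic named "turbofan-metrics"; the Logstash pipeline that reads this topic and writes to Elasticsearch is configured separately.

```python
import json

from kafka import KafkaProducer
from nameko.rpc import rpc

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

class DashboardService:
    name = "dashboard_service"

    @rpc
    def publish(self, engine_id, cycle, remaining_cycles):
        # Each message becomes one Elasticsearch document via the Logstash pipeline.
        producer.send("turbofan-metrics", {
            "engine_id": engine_id,
            "cycle": cycle,
            "remaining_cycles": remaining_cycles,
        })
        producer.flush()
```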