In this story, Locust is applied to load-test Kafka. On each API call made by Locust, a random record stamped with the current time is produced into Kafka. Faust, the real-time stream-processing service, cannot be scaled by itself; so, to keep up with the heavy load generated during the Locust test, Celery, a distributed task queue, does the real processing in a scalable, distributed way.
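As a minimal sketch of that handoff (topic, app, broker, and field names here are all assumptions, not the project's actual ones): each API call builds a random record, and a thin Faust agent forwards every Kafka record to a Celery task instead of processing it inline.

```python
import json
import random
import time

def random_record() -> dict:
    """Random payload that each Locust-driven API call produces into Kafka
    (field names are assumptions)."""
    return {"value": round(random.random(), 6), "ts": time.time()}

# Hedged sketch of the Faust -> Celery bridge; the heavy work runs in a
# scalable Celery worker, not in the (single) Faust process.
try:
    import faust
    from celery import Celery

    celery_app = Celery("worker", broker="redis://localhost:6379/0")

    @celery_app.task
    def process_record(record: dict) -> None:
        ...  # the real (CPU-heavy) processing happens in a Celery worker

    faust_app = faust.App("bridge", broker="kafka://localhost:9092")
    records_topic = faust_app.topic("records", value_type=bytes)

    @faust_app.agent(records_topic)
    async def forward(stream):
        async for raw in stream:
            process_record.delay(json.loads(raw))
except ImportError:
    pass  # faust / celery not installed; random_record() above still works
```

The point of the design is that Faust only reads and dispatches, so the queue of Celery workers can be scaled horizontally while the stream consumer stays single.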
The repository for this project can be found here
First, Postgres and Kafka should be brought up:
Then the API service:
And finally the scalable queue:
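Assuming all services live in one docker-compose file (the service names below are guesses, not the project's actual ones), the startup order above might look like:

```shell
docker-compose up -d postgres kafka            # database and broker first
docker-compose up -d api                       # then the api service
docker-compose up -d --scale worker=3 worker   # finally the scalable celery queue
```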
In this story, an ML pipeline service for training machine learning models is illustrated.
This platform consists of the following services:
Let’s build it:
In this article, the implementation of real-time bitcoin price prediction and monitoring is explained.
The project for this story has been developed in this repository.
For this job I use one of the cool open APIs, and I am very thankful to them.
I want to ask for the price of bitcoin every 200 seconds, regenerate the predictive model every 24 hours, and have the Bokeh application update every 250 seconds. So I developed this Docker image for it.
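Those three intervals can be sketched as a tiny dispatch table; the task names below are placeholders, not the project's real functions, and a loop would call `due_tasks` periodically and run whatever it returns.

```python
# Intervals from the article (in seconds); task names are hypothetical.
INTERVALS = {
    "fetch_price": 200,           # ask the open API for the bitcoin price
    "retrain_model": 24 * 3600,   # rebuild the predictive model daily
    "refresh_bokeh": 250,         # push fresh data to the bokeh app
}

def due_tasks(last_run: dict, now: float) -> list:
    """Return the names of tasks whose interval has elapsed since their last run."""
    return [name for name, every in INTERVALS.items()
            if now - last_run.get(name, 0) >= every]
```

A driver loop would then sleep a few seconds, call `due_tasks`, execute each returned task, and record its run time back into `last_run`.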
In this app, the bitcoin prices are first inserted into a CSV file:
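A minimal sketch of that step, assuming a simple timestamp-and-price row layout (the column layout is an assumption):

```python
import csv
import time

def append_price(path: str, price: float) -> None:
    """Append one (timestamp, price) row to the CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), price])
```

Opening the file in append mode each time keeps the writer stateless, so the periodic fetch job can call it without holding a file handle open.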
In this story: the process of developing a live chatroom with FastAPI WebSockets, building it with Docker, and deploying it on Heroku :))
This repository includes all the code and the Docker files.
This class makes sure messages are delivered to the members of the specified chatroom.
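A minimal sketch of such a class, in the usual FastAPI WebSocket connection-manager pattern (the class and method names here are assumptions, not the project's own):

```python
from collections import defaultdict

class ChatroomManager:
    """Track WebSocket connections per room and broadcast messages to one room."""

    def __init__(self):
        # room name -> list of connected websockets
        self.rooms = defaultdict(list)

    async def connect(self, room: str, websocket) -> None:
        self.rooms[room].append(websocket)

    def disconnect(self, room: str, websocket) -> None:
        self.rooms[room].remove(websocket)

    async def broadcast(self, room: str, message: str) -> None:
        # deliver the message to every member of the specified chatroom
        for ws in self.rooms[room]:
            await ws.send_text(message)
```

In a FastAPI app, the websocket endpoint would `await manager.connect(room, websocket)` on open, loop on `websocket.receive_text()` and call `broadcast`, and call `disconnect` in the `WebSocketDisconnect` handler.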
In this article, the development and deployment of a microservice is explained. This microservice consists of:
The model has been developed with the random forest algorithm; the target is to predict the remaining useful life…
In the first part, the machine learning side is described. The notebook for this fraud detection project can be found here.
Because of its simplicity, a random forest model is employed in this project to check whether the submitted data is fraudulent or not.
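A minimal sketch of that choice with scikit-learn (the features and labels below are synthetic stand-ins; the real project trains on its own transaction features):

```python
from sklearn.ensemble import RandomForestClassifier

def train_fraud_model(X, y):
    """Fit a small random forest that labels a transaction as fraud (1) or not (0)."""
    model = RandomForestClassifier(n_estimators=10, random_state=0)
    model.fit(X, y)
    return model
```

At serving time, the API would call `model.predict([features])` on each incoming record and flag it when the prediction is 1.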
The database for this real-time web application is Cassandra. This is the class that creates the connection and the keyspace:
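A hedged sketch of such a class using the `cassandra-driver` package (the class name, host, and keyspace name are assumptions; the driver import is kept lazy so the class can be defined without a running cluster):

```python
class CassandraSession:
    """Create a Cassandra connection and ensure a keyspace exists."""

    def __init__(self, hosts=("127.0.0.1",), keyspace="prices"):
        self.hosts = list(hosts)
        self.keyspace = keyspace
        self.session = None  # set by connect()

    def connect(self):
        # imported lazily so the class can be defined without the driver installed
        from cassandra.cluster import Cluster

        cluster = Cluster(self.hosts)
        self.session = cluster.connect()
        # SimpleStrategy with RF=1 is fine for a single-node dev setup
        self.session.execute(
            "CREATE KEYSPACE IF NOT EXISTS %s WITH replication = "
            "{'class': 'SimpleStrategy', 'replication_factor': 1}" % self.keyspace
        )
        self.session.set_keyspace(self.keyspace)
        return self.session
```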
This story covers the deployment, with Kubernetes, of the Nameko microservice explained HERE. As mentioned in part 1, each microservice component has its own Docker image; so this story explains the process of deploying them.
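Since each component ships as its own image, each one gets its own Deployment. A hedged sketch of what one of them might look like (names, replica count, image tag, and port are all assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api-service
          image: myregistry/api-service:latest  # the component's own docker image
          ports:
            - containerPort: 8000
```

Each component's manifest would then be applied with `kubectl apply -f <file>.yaml`.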
In this story, the whole process of building a microservice app with Nameko, using both Flask and FastAPI as web services, is explained.
This project is developed in Docker containers and is going to be deployed with Kubernetes. The repo for this project is here.
There are three services in this microservice app:
As can be seen here, this service calculates the Fibonacci number of a given input.
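A minimal sketch of such a service (the service and method names are assumptions; the plain function carries the actual Fibonacci logic, and the Nameko wrapper just exposes it over RPC):

```python
def fibonacci(n: int) -> int:
    """Iterative Fibonacci with fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Hedged sketch of exposing it as a Nameko RPC service:
try:
    from nameko.rpc import rpc

    class FibonacciService:
        name = "fibonacci_service"

        @rpc
        def calculate(self, n: int) -> int:
            return fibonacci(n)
except ImportError:
    pass  # nameko not installed; the plain function above still works
```

Other services would then call it via an RPC proxy, e.g. `n.rpc.fibonacci_service.calculate(10)` in the `nameko shell`.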
In this story, I am going to deploy on Heroku a RESTful API application that recommends music based on a favorite song.
This recommender is a simple KNN content-based recommender; the song information has been gathered from Spotify.
This web application is built with FastAPI, and the whole application is built on a minidev Docker image.
This project is open source and is deployed on Heroku:
The recommendations are all handled here. It simply calculates the distance between each song and the selected song, then sorts the results. …
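The neighborhood-and-sort step above can be sketched as follows (song names and feature vectors are made-up placeholders; the real features come from the Spotify data):

```python
import math

def recommend(selected_features, songs, k=3):
    """Rank candidate songs by Euclidean distance to the selected song's
    feature vector and return the k nearest song names.

    `songs` maps song name -> feature vector.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    ranked = sorted(songs.items(), key=lambda item: dist(item[1], selected_features))
    return [name for name, _ in ranked[:k]]
```

This is the whole KNN content-based idea: no model to fit, just a distance over song features and a sort.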
As we saw in the previous part of the article, the main process of developing the FastAPI app to work with the DeepFace model was explained. Here, the process of deploying the application on Heroku is explained.
The deployment process is briefly explained here:
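Using Heroku's container registry, the steps might look like the following (the app name is hypothetical; the `heroku container:*` commands are the standard Docker-based deploy flow):

```shell
heroku login
heroku container:login
heroku create deepface-api-demo                   # app name is hypothetical
heroku container:push web -a deepface-api-demo    # build and push the image
heroku container:release web -a deepface-api-demo # release it as the web dyno
```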