How to use TensorFlow Serving?
Thursday, 29 May 2025 by kenlpascual
TensorFlow Serving is an open-source system developed by Google for serving machine learning models, particularly those built with TensorFlow, in production environments. Its primary purpose is to provide a flexible, high-performance way to deploy new algorithms and experiments while keeping the same server architecture and APIs. The framework is widely adopted for model deployment.
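As a rough illustration of the typical workflow, the sketch below exports a toy model in the SavedModel format, serves it with the official tensorflow/serving Docker image, and queries the REST API. The toy model, the directory /tmp/half_plus_two, the model name half_plus_two, and port 8501 are illustrative assumptions rather than values from the article; TensorFlow, Docker, and the Python requests library are assumed to be installed.

```python
import json

import requests
import tensorflow as tf


# A toy model: any object exported as a SavedModel can be served; the
# serving signature defines the inputs and outputs of the REST/gRPC API.
class HalfPlusTwo(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.float32)])
    def serve(self, x):
        return {"outputs": 0.5 * x + 2.0}


model = HalfPlusTwo()
# The numeric subdirectory ("1") is the model version TensorFlow Serving loads.
tf.saved_model.save(model, "/tmp/half_plus_two/1",
                    signatures={"serving_default": model.serve})

# Serve the exported model with the official Docker image (run in a shell):
#   docker run -p 8501:8501 \
#     -v /tmp/half_plus_two:/models/half_plus_two \
#     -e MODEL_NAME=half_plus_two tensorflow/serving

# Query the REST endpoint exposed on port 8501 once the container is running.
payload = {"instances": [[1.0], [5.0]]}
resp = requests.post(
    "http://localhost:8501/v1/models/half_plus_two:predict",
    data=json.dumps(payload),
)
print(resp.json())  # expected: {"predictions": [[2.5], [4.5]]}
```

Newer versions of the same model can be deployed by exporting to a higher-numbered subdirectory (e.g. /tmp/half_plus_two/2); TensorFlow Serving loads the new version without changes to the server or the client code, which is the model-versioning behaviour the tags below refer to.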
- Published in Artificial Intelligence, EITC/AI/GCML Google Cloud Machine Learning, First steps in Machine Learning, Plain and simple estimators
Tagged under: Artificial Intelligence, Docker, GRPC, Model Deployment, Model Versioning, Production ML, REST API, SavedModel, TensorFlow Serving
What are the three types of production ML scenarios based on the rate of change in ground truth and data?
Saturday, 05 August 2023 by EITCA Academy
In the field of machine learning (ML) engineering for production ML deployments with TensorFlow Extended (TFX), there are three types of production ML scenarios based on the rate of change in ground truth and data. These scenarios are known as static, dynamic, and evolving ML scenarios. 1. Static ML scenarios: in a static ML scenario, the ground truth and the underlying data change slowly, if at all, so a deployed model can remain in production for long periods with infrequent retraining.
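One way to make the distinction concrete is to map each scenario to a monitoring and retraining policy. The characteristics and intervals below are illustrative assumptions based only on the scenario names, not definitions from the article; a minimal sketch in Python:

```python
from dataclasses import dataclass
from enum import Enum


class Scenario(Enum):
    STATIC = "static"      # ground truth and data change slowly, if at all
    DYNAMIC = "dynamic"    # data drifts regularly while ground truth stays fairly stable
    EVOLVING = "evolving"  # both ground truth and data change quickly


@dataclass
class RetrainingPolicy:
    scenario: Scenario
    retrain_interval_days: int  # hypothetical cadence chosen per scenario
    monitor_drift: bool


# Illustrative policies only: the right cadence depends on how fast
# ground truth and data actually change in a given deployment.
POLICIES = {
    Scenario.STATIC: RetrainingPolicy(Scenario.STATIC, 180, monitor_drift=False),
    Scenario.DYNAMIC: RetrainingPolicy(Scenario.DYNAMIC, 30, monitor_drift=True),
    Scenario.EVOLVING: RetrainingPolicy(Scenario.EVOLVING, 1, monitor_drift=True),
}

for scenario, policy in POLICIES.items():
    print(f"{scenario.value}: retrain every {policy.retrain_interval_days} day(s), "
          f"drift monitoring {'on' if policy.monitor_drift else 'off'}")
```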
- Published in Artificial Intelligence, EITC/AI/TFF TensorFlow Fundamentals, TensorFlow Extended (TFX), ML engineering for production ML deployments with TFX, Examination review
Tagged under: Artificial Intelligence, Machine Learning, ML Engineering, Production ML, TensorFlow, TFX