The document discusses training and deploying machine learning models with Kubeflow and TensorFlow Extended (TFX). It provides an overview of Kubeflow as a platform for building ML products using containers and Kubernetes. It then describes key TFX components like TensorFlow Data Validation (TFDV) for data exploration and validation, TensorFlow Transform (TFT) for preprocessing, and TensorFlow Estimators for training and evaluation. The document demonstrates these components in a Kubeflow pipeline for a session-based news recommender system, covering data validation, transformation, training, and deployment.
2. About us
Gabriel Moreira
Lead Data Scientist - CI&T
Doctoral Candidate - ITA
@gspmoreira
Rodrigo Pereira
Data Scientist - CI&T
Master's Student - UNICAMP

Fábio Uechi
ML Engineer - CI&T
@fabiouechi
3. DRIVEN BY
IMPACT
We are digital transformation agents
for the most valuable brands in the
world, generating business impact for
all projects we lead.
4. Investing in Machine
Learning since 2012
Recognized Expertise
Google ML Specialized Partner
TensorFlow.org Reference
ciandt.com
Cognitive
Solutions
End-to-End
Machine Learning
Capabilities
5. AGENDA
● Motivation
● Kubeflow
● TFX (TensorFlow Extended)
● Demo - News Recommender System
  ○ Data validation
  ○ Transform
  ○ Model training and evaluation
  ○ Deploy
● Demo - ML models serving and monitoring
8. MOTIVATION
Prototype MVP with demo in Jupyter Notebook: 2 weeks
Demo with front-end mockup and blog post: +3 days
experiments.github.com: +3 months
https://github.com/hamelsmu/code_search
https://towardsdatascience.com/semantic-code-search-3cd6d244a39c
https://experiments.github.com/
10. Reality: ML requires DevOps; lots of it
[Diagram: the small "ML Code" box is dwarfed by the surrounding infrastructure it depends on: Configuration, Data Collection, Data Verification, Feature Extraction, Process Management Tools, Analysis Tools, Machine Resource Management, Serving Infrastructure, Monitoring]
Source: Sculley et al., "Hidden Technical Debt in Machine Learning Systems"
11. Less DevOps work
Let data scientists and ML engineers focus on models & data
Source: Monica Rogati's "The AI Hierarchy of Needs"
14. A curated set of compatible tools and artifacts that lays a
foundation for running production ML apps on top of
Kubernetes
15. What is Kubernetes ?
Greek for "helmsman"; also the root of the word "governor"
● Container orchestrator
● Runs containers
● Supports multiple clouds and bare-metal environments
● Inspired and informed by Google's experiences and internal systems
● Open source, written in Go
● kubernetes.io
Manage applications, not machines
16. Kubeflow: A platform for building ML products
● Leverage containers and Kubernetes to solve the challenges of building ML products
● Reduce the time and effort to get models launched
● Why Kubernetes?
  ○ Kubernetes runs everywhere
  ○ Enterprises can adopt shared infrastructure and patterns for ML and non-ML services
  ○ Knowledge transfer across the organization
● Kubeflow is open
  ○ No lock-in
  ○ 120+ members
  ○ 20+ organizations
  ○ Stats available @ http://devstats.kubeflow.org
17. ML Components
● Goal: components for every stage of ML
● Examples:
  ○ Experimentation / data exploration: Jupyter / JupyterHub
  ○ Training: K8s CRDs for distributed training with PyTorch & TFJob; Katib for hyperparameter tuning
  ○ Workflows: Pipelines
  ○ Feature store: Feast (from GOJEK)
  ○ Serving: Seldon, TF Serving, and NVIDIA TensorRT
25. Challenges
News Recommender Systems
1. Streaming clicks and news articles
2. Most users are anonymous
3. Users' preferences shift
4. Accelerated relevance decay

Percentile of clicks | Article age
10%                  | up to 4 hours
25%                  | up to 5 hours
50% (median)         | up to 8 hours
75%                  | up to 14 hours
90%                  | up to 26 hours
26. Factors affecting news relevance
News Recommender Systems
[Diagram: factors that determine the relevance of a news article for a user]
● News article
  ○ Static properties: Topics, Entities, Publisher
  ○ Dynamic properties: Recency, Popularity
● User
  ○ Current context: Time, Location, Device, Referrer
  ○ User interests: Long-term interests, Short-term interests
● Global factors: Seasonality, Breaking events, Popular Topics
27. News session-based recommender overview
CHAMELEON
[Diagram: the clicks of a user session (C1, C2, C3, C4) feed an RNN model that predicts the next click; the candidate (recommendable) articles (Article A, Article B, Article C, Article D, ...) are scored against the prediction and returned as ranked articles]
28. Next-Article Recommendation (NAR)
[Architecture diagram. When a user reads a news article, the active article's Content Embedding, the article context (Popularity, Recency), and the user context (Time, Location, Device) feed the Contextual Article Representation (CAR) sub-module, producing a User-Personalized Contextual Article Embedding. The Session Representation (SR) sub-module summarizes the active user session (past read articles) and outputs a Predicted Next-Article Embedding. The Recommendations Ranking (RR) sub-module matches that prediction against candidate next articles (positive and negative), sampled from users' past sessions, to produce the recommended articles.]

What goes inside the box? CHAMELEON
Recommendations Ranking (RR) sub-module:
● Eq. 4 - Relevance score of an item for a user session
● Eq. 5 - Cosine similarity
● Eq. 6 - Softmax over relevance score (HUANG et al., 2013)
● Eq. 7 - Loss function (HUANG et al., 2013)
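The ranking step described by Eqs. 4-6 can be sketched in plain Python: relevance is the cosine similarity between the predicted next-article embedding and each candidate's embedding, normalized with a softmax. This is only an illustrative sketch (the function names and toy embeddings are ours), not the CHAMELEON implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (cf. Eq. 5)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_candidates(predicted_emb, candidates):
    """Score candidate articles against the predicted next-article embedding
    (cf. Eq. 4) and normalize the scores with a softmax (cf. Eq. 6)."""
    scores = {aid: cosine(predicted_emb, emb) for aid, emb in candidates.items()}
    # Numerically stable softmax over the relevance scores
    m = max(scores.values())
    exps = {aid: math.exp(s - m) for aid, s in scores.items()}
    total = sum(exps.values())
    probs = {aid: e / total for aid, e in exps.items()}
    # Highest probability first = ranked recommendations
    return sorted(probs.items(), key=lambda kv: -kv[1])

# Toy example: candidate "A" points in almost the same direction
# as the predicted embedding, so it is ranked first.
ranking = rank_candidates([1.0, 0.0], {"A": [0.9, 0.1], "B": [0.0, 1.0]})
```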
30. TensorFlow Extended
TFX is a set of libraries that helps you implement a scalable, high-performance machine learning pipeline, covering steps such as data preprocessing, modeling, training, serving inference, and managing deployments to online, mobile, and JavaScript targets.
Main components:
● TensorFlow Data Validation (TFDV)
● TensorFlow Transform (TFT)
● TensorFlow Model Analysis (TFMA)
Note: Apache Beam is required to build any TFX pipeline.
33. TFDV - TensorFlow Data Validation
TensorFlow Data Validation (TFDV) is a library for data exploration and validation.
TFDV includes:
● Scalable calculation of summary statistics over training and test data
● Integration with a viewer for data distributions and statistics
● Automated data-schema generation to describe expectations about the data, such as required values, ranges, and vocabularies
● Anomaly detection to identify problems such as missing features, missing values, out-of-range values, wrong feature types, and distribution skew
34.
import logging

import tensorflow_data_validation as tfdv

logger = logging.getLogger(__name__)

FEATURES_TO_CHECK = ["click_article_id", "session_start", "click_timestamp",
                     "click_region", "click_environment", "click_country",
                     "click_os", "session_size", "session_id",
                     "click_deviceGroup", "user_id", "click_referrer_type"]


def analyse(input_data_list, top_n, offset=24):
    logger.info('Inferring data schema from the first file')
    stats = tfdv.generate_statistics_from_csv(
        data_location=input_data_list[0])
    inferred_schema = tfdv.infer_schema(statistics=stats)
    logger.info('Inferred schema:\n%s', inferred_schema)

    # Set skew and drift thresholds (L-infinity norm) for the checked features
    for feat_name in FEATURES_TO_CHECK:
        feature = tfdv.get_feature(inferred_schema, feat_name)
        feature.skew_comparator.infinity_norm.threshold = 0.01
        feature.drift_comparator.infinity_norm.threshold = 0.01

    curr_stats = stats
    for file_i in range(offset, top_n):
        logger.info('Checking for anomalies between %s and %s',
                    input_data_list[file_i - offset], input_data_list[file_i])
        future_stats = tfdv.generate_statistics_from_csv(
            data_location=input_data_list[file_i])
        anomalies = tfdv.validate_statistics(
            statistics=future_stats, schema=inferred_schema,
            previous_statistics=curr_stats)
        n_anomalies = len(anomalies.anomaly_info)
        if n_anomalies == 0:
            logger.info('No anomalies found')
        else:
            logger.warning('%s anomalies found', n_anomalies)
            for feature_name, anomaly_info in anomalies.anomaly_info.items():
                logger.info('Feature %s anomaly: %s',
                            feature_name, anomaly_info.description)
        curr_stats = future_stats
36. TFT - TensorFlow Transform
A library for preprocessing data with TensorFlow. TensorFlow Transform is useful for data that requires full-pass transformations, such as:
● Input normalization
● Converting strings to integers by generating a vocabulary over all input values
Goal: write the transform function only once and use it both at training and at serving time.
Note: FixedLenSequenceFeature is currently not supported.
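To make "full-pass" concrete: building a vocabulary requires seeing every value in the dataset before any example can be transformed. The plain-Python sketch below (our own illustrative function names) shows the idea that TFT's `tft.compute_and_apply_vocabulary` performs at scale on Apache Beam:

```python
def build_vocabulary(values):
    """Full pass over ALL values: assign each distinct string an integer id,
    ordered by descending frequency (ties broken alphabetically)."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    ordered = sorted(counts, key=lambda v: (-counts[v], v))
    return {v: i for i, v in enumerate(ordered)}

def apply_vocabulary(values, vocab, oov=-1):
    """Map strings to integer ids; unseen values get an out-of-vocab id.
    This step is applied identically at training and at serving time."""
    return [vocab.get(v, oov) for v in values]

train = ["sports", "politics", "sports", "tech"]
vocab = build_vocabulary(train)                  # {"sports": 0, "politics": 1, "tech": 2}
ids = apply_vocabulary(["tech", "arts"], vocab)  # [2, -1]
```

Because the same vocabulary artifact is reused at serving time, training and serving cannot drift apart, which is exactly the guarantee TFT provides by exporting the transform as part of the TensorFlow graph.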
37.
import apache_beam as beam
import tensorflow as tf
import tensorflow_transform as tft
from tensorflow_transform.beam import impl
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import metadata_io


def feature_spec_schema():
    """Feature specification schema."""
    schema_dict = {}
    # Fixed-length session-level features (one value per session)
    for feat, feat_type in [('user_id', tf.int64),
                            ('session_id', tf.int64),
                            ('session_start', tf.int64),
                            ('session_size', tf.int64)]:
        schema_dict[feat] = tf.FixedLenFeature([], dtype=feat_type)
    # Variable-length click-level features (one value per click in the session)
    for feat, feat_type in [('click_timestamp', tf.int64),
                            ('click_article_id', tf.int64),
                            ('click_environment', tf.int64),
                            ('click_deviceGroup', tf.int64),
                            ('click_os', tf.int64),
                            ('click_country', tf.int64),
                            ('click_region', tf.int64),
                            ('click_referrer_type', tf.int64)]:
        schema_dict[feat] = tf.VarLenFeature(dtype=feat_type)
    schema = dataset_metadata.DatasetMetadata(
        dataset_schema.from_feature_spec(schema_dict))
    return schema
46.
tft_metadata = TFTransformOutput(FLAGS.tft_artifacts_dir)
model = build_estimator(model_output_dir, article_embeddings_matrix,
                        articles_metadata, articles_features_config, ...)
model.train(input_fn=lambda: prepare_dataset_iterator(
    training_files_chunk, tft_metadata, batch_size=FLAGS.batch_size, ...))
model.evaluate(input_fn=lambda: prepare_dataset_iterator(
    eval_file, tft_metadata, batch_size=FLAGS.batch_size, ...))
predictions = model.predict(input_fn=lambda: prepare_dataset_iterator(
    tfrecords_files, tft_metadata, FLAGS.batch_size, ...))

Training, Evaluating and Predicting with the Estimator
47.
def prepare_dataset_iterator(files, tft_metadata, batch_size=128, ...):
    # Features schema comes from TFT!
    feature_spec = tft_metadata.transformed_feature_spec()
    # This makes a dataset of raw (GZIP-compressed) TFRecords
    dataset = tf.data.TFRecordDataset(files, compression_type='GZIP')
    # Parse each serialized tf.Example using the TFT feature spec
    dataset = dataset.map(lambda x: tf.io.parse_single_example(x, feature_spec))
    # Pad variable-length features to a common shape within each batch
    dataset = dataset.padded_batch(batch_size, padded_shapes=features_shapes)
    # Define an abstract iterator that has the shape and type of our dataset
    iterator = dataset.make_one_shot_iterator()
    # This is an op that gets the next element from the iterator
    next_element = iterator.get_next()
    return next_element

Defining input function
48.
def export_saved_model(model, model_output_path, additional_features_info, tft_metadata):
    raw_feature_spec = feature_spec_schema()

    def serving_input_fn():
        raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
            raw_feature_spec, default_batch_size=None)
        serving_input_receiver = raw_input_fn()
        # Apply the transforms from the TFT graph that were used to
        # generate the materialized training data
        raw_features = serving_input_receiver.features
        transformed_features = tft_metadata.transform_raw_features(raw_features)
        for feature_name in transformed_features.keys():
            if isinstance(transformed_features[feature_name], tf.sparse.SparseTensor):
                transformed_features[feature_name] = tf.sparse.to_dense(
                    transformed_features[feature_name])
        return tf.estimator.export.ServingInputReceiver(
            receiver_tensors=serving_input_receiver.receiver_tensors,
            features=transformed_features)

    servable_model_path = model.export_savedmodel(
        model_output_path, serving_input_fn, strip_default_attrs=True)
    return servable_model_path

Defining serving function and exporting SavedModel
50. TFMA - TensorFlow Model Analysis
TensorFlow Model Analysis allows you to perform model evaluations in the TFX pipeline and view the resulting metrics and plots in a Jupyter notebook. Specifically, it provides:
● Metrics computed over the entire training and holdout datasets, as well as next-day evaluations
● Tracking of metrics over time
● Model quality and performance on different feature slices
● Evaluation over large amounts of data in a distributed manner
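Evaluation on feature slices is the key idea here: an overall metric can hide poor performance on a subgroup. The plain-Python sketch below (our own illustrative helper, with toy data; TFMA computes such per-slice metrics at scale over Beam) shows what "accuracy sliced by a feature column" means:

```python
def accuracy_by_slice(examples, slice_feature):
    """Compute accuracy per value of one feature column.
    Each example is a (features_dict, label, prediction) triple."""
    totals, hits = {}, {}
    for features, label, pred in examples:
        key = features[slice_feature]
        totals[key] = totals.get(key, 0) + 1
        hits[key] = hits.get(key, 0) + (1 if pred == label else 0)
    return {k: hits[k] / totals[k] for k in totals}

# Toy data: accuracy looks fine overall (2/3) but the mobile slice
# is noticeably worse than the desktop slice.
examples = [
    ({"click_deviceGroup": "mobile"}, 1, 1),
    ({"click_deviceGroup": "mobile"}, 0, 1),
    ({"click_deviceGroup": "desktop"}, 1, 1),
]
per_slice = accuracy_by_slice(examples, "click_deviceGroup")
# per_slice == {"mobile": 0.5, "desktop": 1.0}
```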