35. Overview of eAI and ML Design Patterns
ML Design Patterns:
- Data Representation: Hashed Feature, Embeddings, Feature Cross, Multimodal Input
- Problem Representation: Reframing, Multilabel, Ensembles, Cascade, Neutral Class, Rebalancing
- Model Training: Useful Overfitting, Checkpoints, Transfer Learning, Distribution Strategy, Hyperparameter Tuning
- Resilient Serving: Stateless Serving Function, Batch Serving, Continued Model Evaluation, Two-Phase Predictions, Keyed Predictions
- Reproducibility: Transform, Repeatable Splitting, Bridged Schema, Windowed Inference, Workflow Pipeline, Feature Store, Model Versioning
- Responsibility and Explainability: Heuristic Benchmark, Explainable Predictions, Fairness Lens
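As one illustration from the catalog above, the Hashed Feature pattern maps a high-cardinality categorical value into a fixed number of buckets, trading occasional collisions for a bounded feature space. A minimal sketch (function and variable names are illustrative, not from the source):

```python
import hashlib

def hashed_feature(value: str, num_buckets: int = 16) -> int:
    # Deterministically hash the raw categorical value, then bucket it.
    # A stable digest (MD5) is used instead of Python's built-in hash(),
    # which is salted per process and would break reproducibility.
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# Unseen categories at serving time still map into a valid bucket,
# so no vocabulary file has to be shipped with the model.
buckets = [hashed_feature(airport) for airport in ["JFK", "SFO", "ORD"]]
```

Collisions between distinct values are the accepted cost of this pattern; the bucket count is a tunable trade-off between collision rate and feature dimensionality.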
eAI Patterns:
- Topology: Different Workloads in Different Computing Environments, Distinguish Business Logic from ML Models, ML Gateway Routing Architecture, Microservice Architecture for ML, Lambda Architecture, Kappa Architecture, Data Lake for ML, Separation of Concerns and Modularization of ML Components
- Programming Model: Parameter-Server Abstraction, Data Flows Up Model Flows Down, Secure Aggregation
- Operations: Discard PoC Code, ML Versioning, Encapsulate ML Models within Rule-base Safeguards, Deployable Canary Model
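Of the operations patterns above, Encapsulate ML Models within Rule-base Safeguards can be sketched as a thin wrapper that lets deterministic rules override or bound the model's output. A minimal sketch, assuming the safeguard is a simple output-range rule (names and the clamping rule are illustrative, not from the source):

```python
from typing import Callable

def safeguarded_predict(model: Callable[[float], float],
                        lower: float, upper: float) -> Callable[[float], float]:
    """Wrap a model so that deterministic business rules bound its output.

    Clamping to [lower, upper] stands in for whatever domain safety
    rules apply; the point of the pattern is that the rules, not the
    model, have the final word.
    """
    def predict(x: float) -> float:
        raw = model(x)
        # Rule-based safeguard: the model's raw output is never allowed
        # to leave the range the rules consider safe.
        return min(max(raw, lower), upper)
    return predict

# Example: a toy "model" that can overshoot is kept within [0, 100].
toy_model = lambda x: 3.0 * x
safe = safeguarded_predict(toy_model, 0.0, 100.0)
```

Because the safeguard lives outside the model artifact, the model can be retrained or swapped (e.g., for a Deployable Canary Model rollout) without touching the safety rules.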
Areas under consideration for future expansion