[ORAN] AI/ML workflow description and requirements ~1
| Name | Description | Deployment Location |
| --- | --- | --- |
| ML Training Host | The network function which hosts the training of the model, including offline and online training. *Note: the online-training part overlaps with the functionality of the ML Inference Host.* | Non-RT RIC, offline (using data collected from the RIC, O-DU and O-RU). *Note: "offline" means the ML model is trained outside the RIC.* |
| ML Inference Host | The network function which hosts the ML model during inference mode (which includes both the model execution as well as any online learning, if applicable). | The ML inference host often coincides with the Actor. *Note: the difference between the Actor and the ML Inference Host is that the Actor uses the ML algorithm's output to make decisions (actions), and a decision may be based on the outputs of multiple ML algorithms.* |
| Actor | The entity which hosts an ML-assisted solution using the output of ML model inference. | |
| Model Management | The network function that manages the ML models to be deployed in the inference host. *Note: an ML model may take the form of a K8s pod, or of an ML model file (e.g., network parameters).* | Model management manages models that are onboarded directly from the ML training host, or those from the ML compiling host when model compiling is executed after training. |
| Model Compiling Host | Optional network function which compiles the trained model into a specific format for optimized inference execution in an inference host, based on the compiling information. | Non-RT RIC can also be a compiling host. ML compiling can be performed offline. |
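To make the relationships between these roles concrete, here is a minimal Python sketch of the workflow described in the table: a training host produces a model, model management onboards it either directly or via the optional compiling host, deploys it to an inference host, and an Actor decides based on one or more inference outputs. All class and method names here are hypothetical illustrations, not part of any O-RAN API, and the "inference" is a placeholder computation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the O-RAN AI/ML workflow roles; names are illustrative.

@dataclass
class MLModel:
    name: str
    compiled: bool = False  # True once a compiling host has optimized it

class MLTrainingHost:
    """Trains a model offline (e.g. hosted in the Non-RT RIC)."""
    def train(self, name: str) -> MLModel:
        return MLModel(name=name)

class ModelCompilingHost:
    """Optional: compiles a trained model for optimized inference execution."""
    def compile(self, model: MLModel) -> MLModel:
        return MLModel(name=model.name, compiled=True)

class ModelManagement:
    """Onboards models from the training host or the compiling host,
    then deploys them to an inference host."""
    def __init__(self):
        self.catalog: dict[str, MLModel] = {}

    def onboard(self, model: MLModel) -> None:
        self.catalog[model.name] = model

    def deploy(self, name: str, host: "MLInferenceHost") -> None:
        host.models[name] = self.catalog[name]

class MLInferenceHost:
    """Executes deployed models; often coincides with the Actor."""
    def __init__(self):
        self.models: dict[str, MLModel] = {}

    def infer(self, name: str, x: float) -> float:
        # Placeholder: a real host would execute the model (pod / model file).
        assert name in self.models
        return x * 2.0

class Actor:
    """Makes a decision (action) from one or more inference outputs."""
    def decide(self, outputs: list[float]) -> str:
        return "scale_up" if sum(outputs) > 1.0 else "hold"

# Wire up the two onboarding paths from the table:
training, compiling = MLTrainingHost(), ModelCompilingHost()
mgmt, inference, actor = ModelManagement(), MLInferenceHost(), Actor()

mgmt.onboard(training.train("traffic_predictor"))           # direct from training host
mgmt.onboard(compiling.compile(training.train("qoe_est")))  # via compiling host
mgmt.deploy("traffic_predictor", inference)
mgmt.deploy("qoe_est", inference)

# The Actor's decision may depend on multiple ML algorithm outputs.
action = actor.decide([inference.infer("traffic_predictor", 0.4),
                       inference.infer("qoe_est", 0.3)])
print(action)  # → scale_up
```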