# Using with MLflow
MLflow is an open-source platform for managing the ML lifecycle. This guide shows how to use MLOps Desktop alongside MLflow for enhanced experiment tracking.
## Why Use Both?

| MLOps Desktop | MLflow |
|---|---|
| Visual UI for building pipelines | Centralized experiment tracking |
| Local-first, no server needed | Collaborative team features |
| Quick prototyping | Production model registry |
| Individual workflows | Team-wide visibility |
## Install MLflow

```bash
pip install mlflow
```

## Start the MLflow Server (Optional)

For the UI and persistent storage:

```bash
mlflow ui --backend-store-uri sqlite:///mlflow.db
```

This starts a server at http://localhost:5000.
## Integration Approaches

### Approach 1: Log from Script Nodes

Add MLflow logging to your Script nodes:
```python
import mlflow

# Set tracking URI (optional, uses ./mlruns by default)
mlflow.set_tracking_uri("http://localhost:5000")

# Start a run
with mlflow.start_run(run_name="random-forest-v1"):
    # Log parameters (from your Trainer configuration)
    mlflow.log_params({
        "model_type": "RandomForestClassifier",
        "n_estimators": 100,
        "max_depth": 10,
        "test_size": 0.2
    })

    # Train your model (use the model from pipeline)
    # model.fit(X_train, y_train)

    # Log metrics (from Evaluator results)
    mlflow.log_metrics({
        "accuracy": 0.967,
        "f1_score": 0.965,
        "precision": 0.968,
        "recall": 0.967
    })

    # Log the model
    mlflow.sklearn.log_model(model, "model")

    # Log artifacts (like SHAP plots)
    # mlflow.log_artifact("shap_summary.png")
```

### Approach 2: Post-Export Logging
After exporting from MLOps Desktop:
```python
import mlflow
import joblib
import json

# Load the exported model and metadata
model = joblib.load("model.joblib")
with open("model_meta.json") as f:
    meta = json.load(f)

# Log to MLflow
with mlflow.start_run(run_name=meta.get("model_type", "model")):
    mlflow.log_params(meta.get("hyperparameters", {}))
    mlflow.log_metrics(meta.get("metrics", {}))
    mlflow.sklearn.log_model(model, "model")
    mlflow.log_artifact("model_meta.json")
```

### Approach 3: Batch Logging Script
Create a script to log all your experiments:
```python
import mlflow
import joblib
import json
from pathlib import Path


def log_experiment(model_path: str, meta_path: str):
    """Log an MLOps Desktop export to MLflow."""
    model = joblib.load(model_path)
    with open(meta_path) as f:
        meta = json.load(f)

    with mlflow.start_run(run_name=Path(model_path).stem):
        # Log training date as tag
        mlflow.set_tag("training_date", meta.get("training_date", "unknown"))

        # Log hyperparameters
        params = meta.get("hyperparameters", {})
        mlflow.log_params(params)

        # Log metrics
        metrics = meta.get("metrics", {})
        mlflow.log_metrics(metrics)

        # Log feature info
        mlflow.set_tag("n_features", meta.get("n_features", 0))
        mlflow.set_tag("n_classes", meta.get("n_classes", 0))

        # Log the model
        mlflow.sklearn.log_model(
            model,
            "model",
            registered_model_name=meta.get("model_type", "MLOpsModel")
        )

        # Log metadata file
        mlflow.log_artifact(meta_path)

    print(f"Logged {model_path} to MLflow")


# Log all models in a directory
models_dir = Path("./exported_models")
for model_path in models_dir.glob("*.joblib"):
    meta_path = model_path.with_name(f"{model_path.stem}_meta.json")
    if meta_path.exists():
        log_experiment(str(model_path), str(meta_path))
```

## Workflow Examples
### Solo Data Scientist
1. **Build and iterate in MLOps Desktop**
   - Quick visual prototyping
   - Hyperparameter tuning with Optuna
   - Export the best model
2. **Log to local MLflow**
   - Track experiments over time
   - Compare different approaches
   - Store model artifacts
3. **Deploy from MLflow**
   - Use MLflow’s deployment tools
   - Or serve directly from the exported model (see the sketch after this list)
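For the "serve directly" option, here is a minimal sketch, assuming the `model.joblib` file name used in the export examples above and an iris-style feature vector; adjust both to your own export:

```python
import joblib

# Load the model file exported by MLOps Desktop
# (the path is an assumption -- point it at your actual export).
model = joblib.load("model.joblib")

# Predict on a single sample; the four features mirror the example
# payload used elsewhere in this guide.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```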
### Team Workflow
1. **Each team member uses MLOps Desktop locally**
   - Personal experimentation
   - Quick prototyping
2. **Log successful experiments to a shared MLflow server** (see the sketch after this list)
   - Team visibility
   - Experiment comparison
3. **Use the MLflow Model Registry**
   - Stage models (Staging → Production)
   - Track model lineage
   - Coordinate deployments
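As a sketch of step 2, point your logging script at the shared server before starting a run. The server URL and experiment name below are placeholders, not values MLOps Desktop or MLflow provide:

```python
import mlflow

# Hypothetical shared tracking server -- replace with your team's URL.
mlflow.set_tracking_uri("http://mlflow.internal.example.com:5000")

# Group everyone's runs under one experiment so they are easy to compare.
# The experiment name is an example only.
mlflow.set_experiment("churn-models")

with mlflow.start_run(run_name="churn-random-forest-v2"):
    mlflow.set_tag("source", "mlops_desktop")
    mlflow.log_metric("accuracy", 0.967)
```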
## MLflow Features to Use

### Experiment Tracking

View all runs in the MLflow UI:
```
http://localhost:5000
```

Compare metrics, parameters, and artifacts across runs.
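For comparisons outside the UI, a small sketch using `mlflow.search_runs`, which returns an experiment's runs as a pandas DataFrame; the experiment name and column names here are assumptions based on the logging examples above:

```python
import mlflow

# Pull all runs from an experiment into a DataFrame for side-by-side comparison.
# "churn-models" is an example experiment name, not one created for you.
runs = mlflow.search_runs(experiment_names=["churn-models"])

# Columns follow the "metrics.*" / "params.*" naming MLflow uses;
# these particular ones exist only if you logged them.
print(runs[["run_id", "metrics.accuracy", "params.n_estimators"]])
```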
### Model Registry

Register production models:
```python
mlflow.register_model(
    "runs:/run_id/model",
    "ChurnPredictor"
)
```

Transition between stages:
```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
client.transition_model_version_stage(
    name="ChurnPredictor",
    version=1,
    stage="Production"
)
```

### Model Serving
Serve a registered model:
```bash
mlflow models serve -m "models:/ChurnPredictor/Production" -p 5001
```

Call the endpoint:
```bash
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{"inputs": [[5.1, 3.5, 1.4, 0.2]]}'
```
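If you prefer to call the endpoint from Python instead of curl, a minimal sketch with the `requests` library, using the same payload and port as above:

```python
import requests

# Same payload and port as the curl example above.
response = requests.post(
    "http://localhost:5001/invocations",
    json={"inputs": [[5.1, 3.5, 1.4, 0.2]]},
)
print(response.json())
```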
## Naming Conventions

Use consistent names across MLOps Desktop and MLflow:

```
Pipeline Name:      churn-random-forest-v2
MLflow Run Name:    churn-random-forest-v2
MLflow Model Name:  ChurnPredictor
```

## Artifact Organization
Log additional artifacts from MLOps Desktop:

```python
# In a Script node
mlflow.log_artifact("feature_importance.png")
mlflow.log_artifact("confusion_matrix.png")
mlflow.log_artifact("shap_summary.png")
```

## Tags for Organization
Use tags to organize experiments:

```python
mlflow.set_tags({
    "source": "mlops_desktop",
    "dataset": "customer_churn_2024",
    "team": "data_science",
    "purpose": "production"
})
```

## Filtering Runs
Search runs programmatically:
```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

runs = client.search_runs(
    experiment_ids=["1"],
    filter_string="tags.source = 'mlops_desktop' AND metrics.accuracy > 0.95"
)

for run in runs:
    print(f"{run.info.run_name}: {run.data.metrics['accuracy']}")
```

## Limitations
### What MLOps Desktop doesn’t do (use MLflow for):

- Multi-user collaboration
- Centralized experiment server
- Model staging/approval workflows
- A/B testing infrastructure
### What MLflow doesn’t do (use MLOps Desktop for):

- Visual pipeline building
- Drag-and-drop ML
- Local-first, offline-capable training
- No-code model configuration
## Conclusion

MLOps Desktop and MLflow complement each other:
- Build and iterate in MLOps Desktop (visual, fast, local)
- Track and deploy with MLflow (collaborative, production-ready)
This combination gives you the best of both worlds: rapid local development with enterprise-grade experiment tracking.
Next steps: