ASCENT: ML Engineering from Foundations to Mastery
1,333 lecture slides. 10 modules. 320 hours. Zero to masters.
ASCENT is the open-source ML engineering programme from Terrene Open Academy. It takes working professionals from their first line of Python to production-grade ML systems with full governance. Every concept is taught at three depths simultaneously. Every equation is derived, not asserted.
| | Foundation Ascent (M1-M5) | Summit Ascent (M6-M10) |
|---|---|---|
| Level | Zero Python to production ML | Advanced to masters |
| Hours | 160h (40 lessons) | 160h (40 lessons) |
| Outcome | Deploy governed ML models | Build aligned AI agent systems |
Curriculum
| # | Module | What You Master | Slides |
|---|---|---|---|
| 1 | Python & Data Fluency | Python from scratch, Polars, data profiling, visualization | 85 |
| 2 | Statistical Foundations | 20+ distributions, MLE, Bayesian inference, hypothesis testing, bootstrap, information theory | 131 |
| 3 | Feature Engineering & Experiments | CUPED variance reduction, DiD, causal forests, Double ML, 9 encoding methods, Boruta, leakage detection | 99 |
| 4 | Supervised ML | Complete model zoo (linear through CatBoost), XGBoost 2nd-order Taylor, bias-variance decomposition, conformal prediction | 83 |
| 5 | ML Engineering & Production | SHAP axioms + TreeSHAP, LIME, ALE, fairness (impossibility theorem), workflows, DataFlow, model registry, ensembles | 150 |
| 6 | Unsupervised ML & Pattern Discovery | K-means through HDBSCAN, EM/GMM (full derivation), PCA-SVD connection, t-SNE, UMAP, LDA, NMF, BERTopic, anomaly detection | 146 |
| 7 | Deep Learning | Linear regression as NN, backpropagation (full chain rule), parallelized training (data/model/pipeline/tensor), CNN, ResNet, Adam derivation | 100 |
| 8 | NLP & Transformers | BPE tokenization, Word2Vec (negative sampling derivation), LSTM gates, self-attention (why divide by sqrt d_k), transformer architecture, BERT, GPT, Flash Attention | 150 |
| 9 | LLMs, AI Agents & RAG | LLM landscape Q1 2026, 7 RAG architectures, hybrid retrieval, RAGAS evaluation, ReAct/Reflexion agents, multi-agent A2A, MCP protocol, Nexus deployment | 235 |
| 10 | Alignment, RL & Governance | LoRA/QLoRA, DPO (5-step derivation from RLHF), GRPO, PPO (clipped objective + GAE), Bellman equations, EU AI Act, PACT D/T/R governance, full platform capstone | 154 |
| Total | | | 1,333 |
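To give a flavour of the derivation-first style, Module 8's question "why divide self-attention scores by sqrt d_k" can be checked numerically: for standard-normal queries and keys, the raw dot product has variance d_k, and scaling by 1/sqrt(d_k) restores variance 1. An illustrative sketch (not course code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000
variances = {}
for d_k in (16, 64, 256):
    q = rng.normal(size=(n_samples, d_k))
    k = rng.normal(size=(n_samples, d_k))
    raw = (q * k).sum(axis=1)                  # unscaled dot-product scores
    # Variance grows linearly with d_k; dividing by sqrt(d_k) keeps it near 1
    variances[d_k] = (raw.var(), (raw / np.sqrt(d_k)).var())
    print(d_k, variances[d_k])
```

Without the scaling, softmax over large-d_k scores saturates and gradients vanish, which is exactly the argument the deck walks through.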
Three teaching layers
Every concept is presented at three depths:
| Layer | Marker | Audience | Example (Bias-Variance) |
|---|---|---|---|
| Intuition | Foundations | Zero-background professionals | “Imagine throwing darts at a target. Bias is how far the center of your throws is from the bullseye. Variance is how spread out they are.” |
| Mathematics | Theory | Intermediate practitioners | E[(y − ŷ)²] = Bias²(ŷ) + Var(ŷ) + σ², derived step by step |
| Research | Advanced | Masters+ / PhD holders | Double descent (Belkin et al., 2019): test error decreases past the interpolation threshold in over-parameterized models |
A banker and a PhD sit in the same classroom. Both leave having learned something they did not know.
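The decomposition in the mathematics layer can also be verified empirically: refit the same model on many fresh noisy training sets, then measure the squared bias and the variance of its predictions at one test point. A minimal Monte Carlo sketch (the true function, noise level, and model degree here are illustrative choices, not course material):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3                      # irreducible noise std dev
x_test = 1.0                     # point at which we decompose the error
n_trials, n_train = 2000, 30

def true_f(x):
    return np.sin(x)

preds = np.empty(n_trials)
for t in range(n_trials):
    # Fresh training set each trial: y = f(x) + noise
    x = rng.uniform(0.0, 2 * np.pi, n_train)
    y = true_f(x) + rng.normal(0.0, sigma, n_train)
    coeffs = np.polyfit(x, y, deg=3)          # degree-3 polynomial model
    preds[t] = np.polyval(coeffs, x_test)

bias_sq = (preds.mean() - true_f(x_test)) ** 2
variance = preds.var()
# E[(y - y_hat)^2] = Bias^2(y_hat) + Var(y_hat) + sigma^2
expected_mse = bias_sq + variance + sigma**2
print(f"bias^2={bias_sq:.4f}  var={variance:.4f}  mse={expected_mse:.4f}")
```

Raising the polynomial degree shrinks the bias term and inflates the variance term, which is the trade-off the dart-throwing analogy describes.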
What ships with the programme
| Component | Count | Details |
|---|---|---|
| Lecture decks | 10 | Reveal.js HTML, three-layer depth, KaTeX math, speaker notes |
| Slides | 1,333 | Every equation derived, every algorithm stepped through |
| Exercises | 80 | Solutions + local Python + Jupyter + Colab (three-format consistency) |
| Datasets | 11 | Singapore-context: HDB resale 15M, taxi 50K, credit 100K, experiment 500K |
| Quizzes | 10 | 246 AI-resilient questions (context-specific, not recall) |
| SDK textbook | 163 tutorials | 83 Python + 80 Rust, basic to advanced |
Delivery formats
| Format | Location | Best for |
|---|---|---|
| Local Python | modules/ascent*/local/*.py | Full async, Nexus deployment |
| Jupyter | modules/ascent*/notebooks/*.ipynb | Interactive exploration |
| Google Colab | modules/ascent*/colab/*.ipynb | Zero-install, GPU access |
Not vendor lock-in
ASCENT teaches industry-standard tools. The Kailash Python SDK (the Foundation’s open-source ML orchestration platform) provides governance and orchestration on top:
| What You Learn | Industry Standard | What Kailash Adds |
|---|---|---|
| Data | Polars (Apache Arrow) | DataExplorer: automated profiling, 8 alert types |
| Classical ML | scikit-learn, XGBoost, LightGBM, CatBoost | TrainingPipeline: orchestrated training + model registry |
| Deep learning | PyTorch | OnnxBridge: portable ONNX export |
| NLP | BERTopic, sentence-transformers | ModelVisualizer: interactive Plotly analysis |
| LLM agents | OpenAI / Anthropic / Groq APIs | Kaizen Delegate: structured output with cost budgets |
| Governance | EU AI Act / Singapore AI Verify | PACT: D/T/R accountability with operating envelopes |
If you move to a different stack, you keep the math, the scikit-learn, the PyTorch, and the architectural patterns.
Quick start
```sh
git clone https://github.com/terrene-foundation/ascent.git
cd ascent
uv venv && uv sync
cp .env.example .env  # API keys for M9-M10

# Your first exercise
uv run python modules/ascent01/local/ex_1.py

# View lecture deck
open decks/ascent01/deck.html
```
License
Apache 2.0 (code and exercises). CC BY 4.0 (lecture content). Use it, extend it, teach with it.