
26th April

The End of Term 6: Monoliths, Human Friction, and the Final Grind

The semester is finally drawing to a close. If my desk is anything to go by, it is currently a burial ground of review slides covering everything from Bayesian Equilibria to Monte Carlo simulations.

Term 6 has been a massive undertaking. Between the heavy quantitative focus of my Engineering Systems and Design modules and the broader theoretical concepts, it has been a balancing act. As I stare down the barrel of finals week, I am not brimming with confidence, but I am pragmatic. I will manage.

Here is a look at how everything is wrapping up before the break.

The Core Grind: Systems, Strategy, and Learning

Before diving into the philosophical, there is the immediate reality of my core modules. The final weeks have been a blur of attempting to synthesise theory with application.

In Game Theory (40.316), the complexity ramped up significantly. We moved far beyond basic payoff matrices into Bayesian Games, double auctions, and applying backward induction to dynamic games. It was one thing to propose a VCG mechanism for “The Capstone Lab Dilemma” earlier this term, but it is entirely another to prepare for a final exam covering incomplete information and continuous types.
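As a refresher for myself, backward induction on a finite game tree is mechanical enough to sketch in a few lines of Python. Everything below — the entry game, the payoffs, the node layout — is an invented toy example, not anything from the 40.316 materials:

```python
# Toy backward induction on a two-stage entry game. The game, payoffs,
# and node structure are invented for illustration (not 40.316 material).

def backward_induct(node):
    """Return (action path, payoff tuple) found by solving the tree backwards."""
    if "payoffs" in node:                      # leaf: terminal payoffs
        return [], node["payoffs"]
    player = node["player"]                    # whose turn it is at this node
    best = None
    for action, child in node["children"].items():
        path, payoffs = backward_induct(child)
        if best is None or payoffs[player] > best[1][player]:
            best = ([action] + path, payoffs)
    return best

# Entrant (player 0) moves first; the incumbent (player 1) responds to entry.
game = {
    "player": 0,
    "children": {
        "enter": {
            "player": 1,
            "children": {
                "fight":       {"payoffs": (-1, -1)},
                "accommodate": {"payoffs": (2, 1)},
            },
        },
        "stay_out": {"payoffs": (0, 3)},
    },
}

path, payoffs = backward_induct(game)
print(path, payoffs)  # → ['enter', 'accommodate'] (2, 1): the subgame-perfect outcome
```

The incumbent's threat to fight is not credible once entry has happened, which is exactly the kind of reasoning the dynamic-games questions keep testing.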

Simulation Modelling and Analysis (40.015) has been equally demanding. My screen has been permanently split between Python scripts and JaamSim models. Reviewing variance reduction techniques, like common random numbers and control variates, makes you realise exactly how delicate statistical output analysis can be. We have modelled everything from the classic Cookie Problem to complex Agent-Based systems, and tying it all together for the final is a heavy lift.
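Control variates are a good example of how delicate this gets: a little extra bookkeeping buys a (hopefully large) variance reduction. Here is a minimal sketch on the textbook-style toy problem of estimating E[e^U] for U ~ Uniform(0, 1), using U itself as the control since E[U] = 0.5 is known exactly — this example is my own, not course code:

```python
# Minimal control-variates demo: estimate E[e^U] for U ~ Uniform(0, 1).
# We know E[U] = 0.5 exactly, so U can serve as the control variate.
# (Toy example of my own, not from 40.015.)
import math
import random

random.seed(42)
n = 100_000
u = [random.random() for _ in range(n)]
y = [math.exp(x) for x in u]                 # crude Monte Carlo samples

mean_u = sum(u) / n
mean_y = sum(y) / n                          # plain MC estimate of E[e^U]

# Estimated optimal coefficient b* = Cov(Y, U) / Var(U).
cov_yu = sum((yi - mean_y) * (ui - mean_u) for yi, ui in zip(y, u)) / (n - 1)
var_u = sum((ui - mean_u) ** 2 for ui in u) / (n - 1)
b = cov_yu / var_u

# Adjusted estimator: correct mean_y by how far mean_u drifted from E[U].
cv_estimate = mean_y - b * (mean_u - 0.5)
print(round(cv_estimate, 4))                 # close to e - 1 ≈ 1.7183
```

Because e^u is nearly linear on [0, 1], Y and U are highly correlated and the reduction is dramatic; the same trick applies whenever a correlated quantity with a known mean is lying around in the model.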

Then there is Statistical and Machine Learning (40.319). Preparing for this final means revisiting the Bias-Variance Trade-off, Principal Component Analysis, and the dense mathematics behind Neural Networks. The presentation my team and I did earlier on Multi-Agent Reinforcement Learning (MARL) feels like ages ago, but tearing apart those algorithms has solidified my goal to step into a Data Scientist role after graduation.
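PCA in particular is worth rebuilding from scratch at least once while revising. A toy 2-D version using only the standard library — the data, seed, and variable names are my own illustration, not the 40.319 code:

```python
# Toy 2-D PCA from scratch (pure stdlib; illustrative, not the 40.319 code).
import math
import random

random.seed(0)
# Correlated 2-D data: y ≈ 2x + noise, so the first principal axis
# should point roughly along (1, 2) / sqrt(5).
pts = []
for _ in range(5000):
    x = random.gauss(0, 1)
    pts.append((x, 2 * x + random.gauss(0, 0.3)))

mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)

# Sample covariance matrix [[a, b], [b, c]].
a = sum((p[0] - mx) ** 2 for p in pts) / (len(pts) - 1)
c = sum((p[1] - my) ** 2 for p in pts) / (len(pts) - 1)
b = sum((p[0] - mx) * (p[1] - my) for p in pts) / (len(pts) - 1)

# Largest eigenvalue of the 2x2 covariance matrix, and its eigenvector
# (b, lam - a), which is the first principal component.
lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b ** 2)) / 2
norm = math.hypot(b, lam - a)
pc1 = (b / norm, (lam - a) / norm)

explained = lam / (a + c)                    # share of variance on the first PC
print(pc1, round(explained, 3))
```

Seeing almost all the variance land on one axis is the whole point: the second component carries little beyond the injected noise, which is the dimensionality-reduction argument in miniature.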

Digital Worlds and Failing Machines

Amidst all the quantitative heavy lifting, one of the most impactful parts of this term came from an unexpected place: my Humanities, Arts, and Social Sciences (HASS) module, Digital Worlds, Space and Spatialities: Geographical Perspectives on Digitalisation.

As an engineering student, my default setting is to view technology as an engine for optimisation. We design systems to reduce friction. However, writing my final papers for this class forced me to confront the unsettling flip side of this pursuit. I visited the “Reworlding” exhibition and spent time with a sound installation called “Litany for a failing Machine”. Listening to a degrading, recursive sermon delivered by a rough AI-cloned voice next to a physical monolith made my skin crawl. It stripped away the polished veneer of technological progress and exposed the fragile, extractive reality underneath.

It made me reflect heavily on my own habits. I have spent this term using tools like Cursor on my Ubuntu setup to speed through coding tasks. While it is incredibly fast, there is a real fear of cognitive de-skilling. When a complex script breaks, the instinct to just ask an AI to fix it rather than debugging the data frame myself is a dangerous trap. We risk outsourcing our critical thinking to un-auditable black boxes.

The Value of Human Friction

This HASS module also completely reframed how I look at group work and project development. In the tech world, there is a constant push for frictionless interactions.

However, when building something that matters, that friction is entirely necessary. Our team recently received some fantastic news regarding our project, Kinetic 104. We have officially been awarded the opportunity to develop it for our actual Entrepreneurship Capstone next semester.

When designing assistive health technology for seniors, an AI cannot replicate the vital, messy aspects of human collaboration. An algorithm might suggest the most technically efficient sensor on paper, but it is the frustrating human debate and the friction of group work that forces us to consider how the seniors at the Watchman Home will actually interact with the device physically. That friction is a feature, not a bug.

GenEd and the Attention Economy

These reflections also tied back into my work with GenEd. When we pitched for the SUTD BabyShark Grant to build our educational tool, we faced a constant ethical tension. We have to employ gamified hooks to keep users engaged, but if we push those hooks too far, we are just participating in the extractive attention economy and cheapening the education itself.

It is a fine line to walk, but recognising that technological authority is an illusion is the first step toward building systems that actually serve people rather than exploit them.

Reclaiming Time

Right now, the priority is getting through these final papers and exams.

Once the last submission is uploaded, I am fully disconnecting. The crunch culture is something I have been trying to unlearn since my internship, and taking a proper break is mandatory. I have some personal travels planned to get away from the screens and clear my head.

Until then, I will be retreating into Westeros in my fleeting moments of free time, making my way through the A Song of Ice and Fire series. The politics of King’s Landing are surprisingly relaxing compared to the complexities of System Dynamics.

Term 6, you have been gruelling. Let’s get this over with.


💡 Rest is not idleness, and to lie sometimes on the grass under trees on a summer’s day, listening to the murmur of the water, or watching the clouds float across the sky, is by no means a waste of time.

— John Lubbock
