Last week, I wrote about customizing Git commands and shell functions to streamline the mechanics of submitting coursework. That article grew out of my experience with Harvard's “CS50: Introduction to Artificial Intelligence with Python” course. Yesterday I submitted what will be my final graded assignment, so today I want to share a deeper reflection on the course itself: what I learned, and how it transformed my understanding of AI and machine learning.
Why CS50 AI?
My decision to enroll in CS50 AI was driven by two factors. First, I wanted to deepen my understanding of AI fundamentals - neural networks, decision trees, and inference engines. Second, I'll admit it: having a Harvard course on my CV was pretty appealing! What I didn't expect was how thoroughly this course would challenge my existing technical knowledge and reshape my understanding of how mathematics and logic can be applied to create intelligent systems.
The Course Structure: From Comfort to Challenge
The course is organized into seven sections, each building upon the previous:
Search
Knowledge
Uncertainty
Optimization
Learning
Neural Networks
Language
Each section comprises a roughly two-hour lecture and a couple of projects that require a solid grasp of the material covered in the lecture. The progression of the sections has been thoughtfully designed. The early sections covered familiar territory - algorithms I've used throughout my career in software development. While not exactly remedial, these initial projects provided a comfortable entry point. Around the midway point, however, the course took a dramatic turn into more challenging territory.
The Deep End: TensorFlow and Beyond
The latter half of the course introduced concepts that were almost entirely new to me. Working with TensorFlow, understanding pooling layers, and training models from scratch pushed me well outside my comfort zone. I found myself in that familiar developer predicament: getting code to work without fully grasping the underlying concepts.
With that, allow me to make a confession: there were moments, particularly in the final projects, where I focused more on making the code pass the unit tests than on truly internalizing the concepts. It's a common trap in software development - the pressure to "make it work" sometimes overshadows the deeper learning opportunity. That habit pays off in the commercial world of shipping features on time and under budget, but in a classroom setting I should have slowed down and absorbed the lesson. I've already committed to revisiting these later projects with the goal of truly understanding rather than just implementing. I'm glad I kept all those local repos I talked about in the last article!
Mathematical Revelations
My professional use of mathematics has been pretty straightforward - recursive tree traversals, CSV data manipulation, linear progressions, maybe a heat map or the occasional candlestick chart - so the course opened my eyes to how mathematical principles can be applied in non-numeric ways to draw inferential conclusions. The journey from familiar boolean logic to AI reasoning was fascinating.
It started with propositional logic, something familiar to any developer who's written an if-statement. But instead of just using AND and OR operators to control program flow, we used them to build knowledge representations. Converting complex statements into conjunctive normal form (basically, a standardized way of expressing logical statements) became a building block for creating inference engines that could actually reason about information.
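To make that concrete, here's a toy sketch of my own (not code from the course) showing how entailment can be checked by brute-force enumeration of truth assignments - the "model checking" idea behind a simple inference engine. The `rain`/`wet` symbols are hypothetical:

```python
from itertools import product

def entails(kb, query, symbols):
    """Check whether the knowledge base entails the query by
    enumerating every possible truth assignment (model checking)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # a model where the KB holds but the query fails
    return True

# "If it rains, the ground is wet" AND "it rains" - does "wet" follow?
kb = lambda m: (not m["rain"] or m["wet"]) and m["rain"]
query = lambda m: m["wet"]
print(entails(kb, query, ["rain", "wet"]))  # True
```

Exponential in the number of symbols, of course, which is exactly why the course moves on to smarter inference techniques built on conjunctive normal form.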
The real mind-bender came with probability theory. Sure, I'd used random number generators and calculated percentages, but this was different. We dove into conditional probabilities - understanding how the likelihood of one event depends on another. It wasn't just about calculating odds; it was about modeling uncertainty in a mathematically rigorous way. This is where Bayesian concepts clicked for me. I'd seen the term "Bayesian" countless times in tech articles and documentation, but now I finally grasped how it enables machines to update their understanding as new evidence comes in - just like humans do.
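A minimal sketch of that updating process, using a made-up diagnostic-test example (the numbers and function name are mine, not the course's):

```python
def bayes_update(prior, sensitivity, false_positive_rate):
    """Posterior probability of the hypothesis given a positive test,
    via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# Rare condition (1% prior), decent test (90% sensitive, 5% false positives)
posterior = bayes_update(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

Even with a positive result, the condition is still unlikely - but the belief has jumped to roughly fifteen times the prior. That "update on evidence" step is the whole Bayesian game.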
These concepts weren't just academic exercises. They became the foundation for building inference engines that could:
Draw logical conclusions from a set of rules (like a legal expert system)
Update beliefs based on new evidence (like a diagnostic system)
Make decisions under uncertainty (like a game-playing AI)

What amazed me was how these mathematical foundations, which start with simple boolean logic every programmer knows, build up to create systems that can actually learn and reason. The way mathematical rules and statistical concepts are woven into AI algorithms to create intelligent behavior was a revelation. I can easily see how these concepts have become staples in AI and ML, and I'm excited to see where they'll take me in the future, both professionally and personally.
Beyond CRUD: A New Programming Paradigm
One of the most profound shifts in my understanding came from experiencing how fundamentally different AI/ML programming is from traditional application development. In my day job, I write a lot of CRUD applications - forms that Create, Read, Update, and Delete data. The logic is explicit: if a user clicks this button, store that data; if this condition is met, show that message. It's deterministic and straightforward.
But AI programming? That's a whole different animal.
Creating Systems That Learn from Data
Take the Nim project, for example. In traditional programming, I might create a game where player moves are governed by explicit rules: "if the player takes the last piece, they lose." But in creating an AI that learns to play Nim, I wasn't programming specific moves - I was creating a system that could learn from experience.
The difference is striking. Instead of writing rules like:
if last_piece_taken:
    declare_winner(other_player)
I found myself designing systems that could develop their own strategic understanding through repeated play. The AI doesn't just follow rules - it learns patterns, develops strategies, and improves its play over time. This was my first real taste of moving from telling a computer exactly what to do, to teaching it how to learn.
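The Nim AI learns through Q-learning, which boils down to one small update rule: nudge the estimated value of a (state, action) pair toward the reward you saw plus the best value reachable afterwards. Here's a simplified sketch (the function and the sample play are my own illustration, not the course's code):

```python
def update_q(q, state, action, reward, best_future, alpha=0.5):
    """Q-learning update: move Q(state, action) toward the observed
    reward plus the best value achievable from the next state."""
    old = q.get((state, action), 0)
    q[(state, action)] = old + alpha * (reward + best_future - old)

q = {}
# Hypothetical play-through: taking 2 pieces from pile 0 led to a win (+1)
update_q(q, state=(1, 3, 5, 7), action=(0, 2), reward=1, best_future=0)
print(q[((1, 3, 5, 7), (0, 2))])  # 0.5
```

Run thousands of self-play games and those accumulated nudges become a strategy no one ever wrote down explicitly.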
Developing Models for Prediction
The Shopping project further illuminated this paradigm shift. In a traditional e-commerce application, you might write code to track user behavior:
if items_in_cart > 0 and session_time > 30:
    show_discount_offer()
But with machine learning, we're doing something far more sophisticated. Instead of hard-coding rules about when to show offers, we're creating models that can predict purchasing behavior based on patterns in historical data. The system learns to recognize complex combinations of factors - browsing patterns, time spent on pages, user demographics - that no human could reasonably program as explicit rules.
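In the actual project, scikit-learn's nearest-neighbor classifier does the heavy lifting, but the core idea fits in a few lines. Here's a from-scratch sketch of the k=1 case, with made-up feature vectors (the names and numbers are mine, purely for illustration):

```python
def nearest_neighbor(train, labels, sample):
    """Classify a sample by copying the label of the closest
    training point (squared Euclidean distance) - k-NN with k=1."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist(train[i], sample))
    return labels[best]

# Hypothetical features: (pages viewed, minutes on site); label: purchased?
train = [(2, 1.0), (3, 2.5), (30, 12.0), (25, 9.5)]
labels = [False, False, True, True]
print(nearest_neighbor(train, labels, (28, 11.0)))  # True
```

No rules about carts or session timers anywhere - the "rule" is implicit in the historical data itself.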
This shift from deterministic programming to probabilistic thinking was challenging. It required embracing uncertainty and thinking in terms of probabilities rather than absolute rules.
Building Self-Improving Systems
Perhaps the most fascinating aspect was working on the Traffic project, which involved computer vision for recognizing road signs. In traditional image processing, you might write code to detect edges, measure shapes, or match patterns. But with neural networks, the system learns to recognize features on its own through training.
What struck me was how the system's performance could improve without any changes to its code. Traditional applications only improve when a developer modifies them, but these AI systems get better through exposure to more data and training. It's a completely different way of thinking about software development.
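To demystify one piece of that: a pooling layer, which sounded exotic to me at first, is just a downsampling step. TensorFlow provides it ready-made, but here's a hand-rolled 2x2 max pool (my own sketch, assuming an even-sized grid) showing the actual computation:

```python
def max_pool_2x2(image):
    """Downsample a 2D grid by taking the max of each 2x2 block -
    the same reduction a max-pooling layer performs inside a CNN."""
    return [
        [max(image[r][c], image[r][c + 1],
             image[r + 1][c], image[r + 1][c + 1])
         for c in range(0, len(image[0]), 2)]
        for r in range(0, len(image), 2)
    ]

grid = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 3],
]
print(max_pool_2x2(grid))  # [[4, 2], [2, 7]]
```

Each output cell keeps only the strongest activation in its neighborhood, shrinking the image while preserving the features that matter.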
The Mental Shift
This transition wasn't just about learning new tools or frameworks - it required a fundamental shift in how I think about programming:
From writing explicit instructions to designing learning environments
From debugging specific logic to understanding model behavior
From deterministic outcomes to probabilistic predictions
From immediate results to iterative training and improvement
The challenges were significant. When something goes wrong in traditional programming, you can step through the code, examine variables, and pinpoint the issue. But with AI systems, understanding why a model makes certain predictions or decisions is often more complex and requires a different kind of debugging mindset.
Key Takeaways and Future Plans
The Theory-Practice Gap: Understanding AI concepts theoretically is vastly different from implementing them. The course makes this clear and helps bridge this gap.
The Importance of Fundamentals: While modern AI tools abstract away much complexity, understanding the underlying principles is crucial for effective implementation.
The Learning Never Stops: Even after completing the course, I've identified areas where I need to deepen my understanding, particularly in neural networks and deep learning concepts.
Looking Forward
I've already planned my next steps. The later projects - particularly those involving TensorFlow and neural networks - deserve a deeper dive. This time, without the pressure of deadlines and grades, I can focus on truly understanding these powerful concepts. And a couple of the projects I had to submit for a grade have sparked some ideas about a possible utility agent or two.
For other professional developers considering this course, I'd offer this advice: don't rush through it. The temptation to "just get it working" is strong, especially if you're an experienced developer. Resist that urge. The real value lies not in completing the projects, but in understanding why they work the way they do.
This course will stretch you. It definitely stretched me. It starts with familiar territory but quickly pushes into areas that'll make you question what you think you know about programming. If you're considering taking it, go for it - but do yourself a favor and resist the urge to just make the tests pass. Take the time to understand the concepts. The credential's nice, sure, but the real value is in grasping how these systems actually work. Trust me, you'll find yourself revisiting these ideas long after you've submitted that final project.