
6.2050 – Field Programmable Gate Awesomeness by Andi Q. '25

Featuring my “aggressive and weird” cat robot

If I had a nickel for every time my final project for an MIT class was themed around cats, I’d have two nickels. Which isn’t a lot, but it’s weird (and pretty awesome) that it happened twice.


This semester, I took 6.2050 (Digital Systems Laboratory, previously known as 6.111) – a class about using field programmable gate arrays (FPGAs) to build digital circuits for all sorts of applications, from real-time digital signal processing to high-speed telecommunications.

I signed up for the class because I really enjoyed its prerequisite, 6.1910 (Computation Structures), where we learned how to design a CPU from scratch. 6.2050 is like 6.1910 but much more hands-on and intense – in addition to simulating our designs in software, almost everything in 6.2050 is built into physical hardware. Plus, my advisor Joe Steinmeyer01 teaches it, and he’s a great lecturer.

6.2050 was one of the most demanding classes I’ve taken at MIT, but it has also been one of the most rewarding. In just a few short weeks, I went from programming a button to turn on a light, to designing and building a robot that could learn to recognize people’s voices. And now that I’ve taken the class, I get to work with FPGAs02 over the summer!

Ok, but first – what is an FPGA?

Short answer: “a big programmable circuit”. Feel free to skip to the next section, but read on for a more technical explanation.

If you’re like me a few months ago, the short answer probably raises more questions than it answers. Aren’t regular computers already big programmable circuits? And computers these days are already so powerful, so why even bother using FPGAs at all?

To answer those questions, it helps to understand how the computer/phone/smart fridge you’re using to read this post works. At the heart of any general-purpose computer is the CPU – a device responsible for fetching and executing program “instructions” that tell the computer what to do. Almost every modern CPU uses the von Neumann architecture, in which instructions are stored and fetched from computer memory instead of being hard-wired as a physical circuit.

The von Neumann architecture is great because it’s super flexible and allows us to reprogram computers easily. It was invented in 1945, and it’s still used pretty much everywhere! Yet despite its ubiquity, it has a few serious limitations:

  • It can be slow because each instruction needs to be fetched from memory, and computer memory is usually much slower than a typical CPU.
  • It consumes quite a lot of power because computer memory hogs a lot of energy.
  • It executes instructions sequentially instead of in parallel, which limits the performance of highly parallelizable tasks like matrix multiplication (which is used heavily in machine learning).
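To make that last point concrete, here’s a hypothetical SystemVerilog sketch (the module name and bit widths are my own invention, not anything from the class): on an FPGA, all eight multiplications below become separate parallel hardware and finish together in a single clock cycle, whereas a von Neumann CPU would grind through them one fetched instruction at a time.

```systemverilog
// Illustrative sketch: an 8-element dot product computed fully in parallel.
// A CPU would need a loop of load/multiply/accumulate instructions;
// here, synthesis produces eight multipliers and an adder tree.
module dot8 (
    input  logic        clk,
    input  logic [7:0]  a [8],   // eight 8-bit inputs
    input  logic [7:0]  b [8],
    output logic [19:0] result   // wide enough to hold the summed products
);
    always_ff @(posedge clk) begin
        result <= a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]
                + a[4]*b[4] + a[5]*b[5] + a[6]*b[6] + a[7]*b[7];
    end
endmodule
```

This is exactly the shape of computation (multiply-accumulate) that matrix multiplication is built from, which is why hardware parallelism matters so much for machine learning workloads.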

On the other end of the computing spectrum, we have application-specific integrated circuits (ASICs) – specialized high-speed circuits designed for one specific task (e.g. mining cryptocurrency). ASICs address these three problems but also come with their own set of flaws – they often take years to develop, and (very importantly) you can’t reprogram them once they’re fabricated.

And that’s why we use FPGAs! Instead of executing a set of instructions stored in memory like a CPU does, you program an FPGA as if you were wiring a custom circuit03 specialized for some task.

FPGA vs CPU

Is it a CPU? Is it an ASIC? No, it’s an FPGA!

FPGAs give us the best of both worlds – the flexibility and reprogrammability of CPUs, with the speed and efficiency of ASICs. (In fact, they’re often used for simulating and verifying ASIC designs before the designs are sent off for fabrication.)

you did it

Yay, thanks for making it through this technical section!

Anyway, back to 6.2050

The first seven weeks of the class consisted of lectures and weekly lab assignments to teach us the fundamentals of digital systems design: things like communication protocols (like how a Nintendo Switch can turn on the TV using HDMI), using analog devices like accelerometers04 in a digital setting, and how to break large, complex systems into smaller, more manageable components.

Labs were done in SystemVerilog – a “hardware description language” that allowed us to describe circuit designs as if we were writing software. One of the coolest things about SystemVerilog was that we could (mostly) simulate these designs in software, which spared me dozens of hours of staring at an oscilloscope trying to debug them in vivo.
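To give a feel for what “describing circuits as if writing software” looks like, here’s roughly the level of the first lab I mentioned earlier – a button that turns on a light. (The port names here are illustrative, not the actual lab’s names.)

```systemverilog
// A minimal "press a button, light an LED" design in SystemVerilog.
// There's no clock or state here: this is pure combinational wiring,
// and synthesis turns it into (essentially) a single wire on the FPGA.
module button_led (
    input  logic btn,   // physical push button on the board
    output logic led    // LED on the board
);
    assign led = btn;   // LED is on exactly while the button is held
endmodule
```

Even a one-line design like this can be simulated in software first, which is what made the debug loop in this class so much faster than poking at real hardware.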

But as cool as it is to simulate circuit designs, the goal of 6.2050 was ultimately to implement these designs as hardware. Back in the 1980s (before FPGAs were mainstream), students taking the class would have to wire up individual logic chips, one logic gate at a time (which earned the class the nickname “digital death lab”). These days, we use Vivado – AMD’s software for synthesizing a circuit05 from SystemVerilog code that can then be “flashed” (programmed) onto the FPGA. It was pretty amazing how it enabled us to build systems with thousands of logic gates instead of just a few dozen like in the 1980s06.

Although I wrote a lot of code in 6.2050, the whole FPGA engineering process was remarkably different from the software engineering process I was used to. Whereas many software frameworks pride themselves on being open-source and having superior developer tooling/documentation, AMD has near-monopoly power over the FPGA market and thus (what seems like) very little incentive to make Vivado pleasant to use07. Most tools were vendor-locked or required hefty licensing fees. And because hardware engineering is rooted in physics, the whole process is slower overall. But because of these frustrations, it was so much more satisfying when things worked. Never before had I felt such satisfaction from pushing a button and having an LED light up.

I thought the class was very reasonably paced during these seven weeks. The labs were relatively straightforward and had clear implementation instructions, but also plenty of room to figure things out independently.

But that all changed when the final project attacked.

*panic*

The last six weeks of the class were dedicated entirely to working on the final project. The course website calls the final project an “opportunity to work on a small digital system08”, but 6.2050 has a reputation for having hardcore final projects that look like they should’ve taken at least a few months to build.

After watching a few videos showing off these projects, my teammate Richard and I were faced with the daunting task of coming up with a similarly impressive yet still achievable project. Unlike some other project-based classes like 6.2500 (Micro/Nano Processing Technologies), a significant chunk of the 6.2050 final project grade depended on whether the system ended up working09, so we couldn’t be too ambitious. But we also wanted to make Joe proud (and maybe inspire – or give imposter syndrome to – future 6.2050 students)…

To help with our ideation, we asked ourselves the following questions:

  • What can we10 make an FPGA do better than a von Neumann computer?
  • What hasn’t been done yet in the class’s 30+ year history?
  • What involves electrical engineering, computer science, and AI – all three facets of MIT EECS?

Unfortunately, the answer to all three questions was “Not a lot of things”, but after hours of brainstorming, we finally came up with something promising. Our conversation was something along the lines of:

Richard: “Hmm, I’ve read that FPGAs are really good at real-time audio processing, like sound localization.”

Andi: “Sound localization? What’s that?”

Richard: “Like you know how Amazon Echo devices know where you’re speaking from?”

Andi: “Oh, that’s pretty cool. Do you think voice recognition would be doable too?”

Richard: “Yeah, probably. That’s something else we could try.”

Abdullah (another friend who took the class with us): “What if… you have both?”

Richard: “Hmm, that’s an interesting idea. I guess we could also make it move toward the voice.”

Andi: “Oh, just like a cat! We could make it meow and chase lasers too!”

And that’s how Feline Programmable Gate Array was born! As the name suggests, the plan was to build a cat-like robot that could learn to recognize one person’s voice (its “owner”) and autonomously move toward that voice whenever it heard it. (And just like a real cat, it would ignore the owner sometimes.)

Schematic of the cat

AI-generated concept art of what we wanted to achieve

We were pretty excited about this project. It fit all three criteria I’d outlined previously, and the course staff seemed to enjoy the idea too. Now, all we had left was to implement it…

*panic (×2)*

In hindsight, we were slightly too aggressive when scoping out this project. Sound localization and voice recognition alone probably would have made for solid final projects, and our project had both plus some more. But after six weeks of grinding out SystemVerilog code and waiting for Vivado to finish synthesis, we were finally done.

So how did the “cat” turn out?

Surprisingly, very well! Thanks to thorough planning and great teamwork, the project did not end disastrously as Richard and I had initially feared. On the contrary, we managed to hit all our goals and had a (mostly) working prototype a few days before the deadline!

(Our demo video could have been a lot better, but we were severely time-constrained, oops…)

I could go on forever about technical details and how everything worked behind the scenes (and if you’re interested, the code lives here), but you’ll probably find it much more entertaining to read about things that went wrong. So for your enjoyment:

  • We spent about three weeks trying to get our microphones to work. Even the TAs were stumped! Then one day, Richard somehow discovered that the microphones were just appending extra bits of (junk) information to their output for no apparent reason. Discarding those bits miraculously fixed all our audio input problems.
  • While debugging the microphone thing, we also discovered (and fixed) a bug in a completely different project. Richard was using Manta (a previous 6.2050 TA’s MEng project) as a debugging tool, but it would often crash randomly. We thought it was an issue with our code, but after poking around for a while, I discovered it was actually a bug in Manta itself. Luckily, it was a two-line fix.
  • Part of the project involved using a Bluetooth module for wireless communication. We could connect to the module from a laptop without any trouble but were initially unable to send or receive any information… because the “documentation”11 had mislabelled the module’s ports.
  • When we first tried connecting motors to the robot, they just… refused to move. We thought the motors were defective, but nope, they were actually crashing the FPGA by drawing too much power12, so we had to use an external battery pack to power them.
  • But even after fixing the power issue, our audio processing mysteriously stopped working. It turned out that the motors were making too much noise, which interfered with the audio processing. Richard eventually fixed the problem somehow (I think by switching to slower but quieter motors).
  • After putting everything together, we still needed a way to power the robot without connecting it to a laptop. We had planned to use my power bank for that (already a pretty jank solution), but it refused to work, so we resorted to using… an iPad instead. Although the iPad worked reasonably well for a few minutes, we then learned the hard way that it was ever so slightly too heavy for the robot’s fragile cardboard frame. Coincidentally, my power bank decided to start working again after that.
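For the curious, the microphone fix from the first bullet point can be sketched in a couple of lines. (The bit widths below are made up for illustration – I don’t remember the real ones – but the idea is the same: slice away the junk bits and keep only the meaningful ones.)

```systemverilog
// Hypothetical version of Richard's fix: suppose the mic shifts out 18
// bits per sample but only the top 16 carry real audio. The junk bits
// can be discarded with a simple bit slice.
logic [17:0] raw_sample;    // bits exactly as received from the mic
logic [15:0] audio_sample;  // cleaned-up sample for downstream processing

assign audio_sample = raw_sample[17:2];  // keep the 16 meaningful bits
```

In hardware this “fix” costs nothing at all – it’s just a choice of which wires to connect – which is part of why it felt so miraculous that such a small change solved weeks of audio problems.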

These setbacks were certainly not fun to debug, but I’m still weirdly glad I got to experience all of them. Because they showed me that I can overcome such problems with perseverance (and an oscilloscope) and that this is the kind of work I want to do after graduation. (And besides, they gave us plenty to write about in our final report!)

  1. Joe, if you’re reading this, Fatema asked me to ask you to bring back 6.08
  2. Turns out FPGA engineering is highly sought after in all sorts of industries, from chip design to high-frequency trading
  3. Technically the FPGA doesn’t route a physical circuit (the “circuit” is actually stored in very fast memory devices called SRAM), but it has the same effect
  4. There was a time in lecture when Joe asked “What was the first large-scale commercial use of MEMS accelerometers?”, and I confidently guessed “Pokewalkers from Pokemon HeartGold/SoulSilver”. The correct answer was car crash detectors in automatic airbag deployment systems
  5. Somewhat like compiling code into individual machine instructions, but much more complicated
  6. Back then, a digital calendar would be an impressive final project; these days, you can probably build that in just a few hours because the tools we have are so much better now
  7. Or maybe they’re just not that good at UI design
  8. I suppose you could call it small when compared to something like the Apple M3 chip
  9. I think this is one of 6.2050’s strengths – MIT students are often overly ambitious, and learning to set realistic goals is an important skill to have as an engineer
  10. Emphasis on “we” – Richard and I are not hardware engineering savants, unfortunately
  11. It wasn’t even proper documentation, but instead a circuit schematic of the FPGA board
  12. According to the board documentation, this was also not meant to happen