
My Projects

Here you can take a look at some of the projects that I have worked on, either as personal projects or for school.

This site is also sort of a project of its own, I suppose. I started with a template that gave me the nav/sidebar, and the rest was created by me. I made it fully responsive, so please resize your browser to see the elements change and reorganize to fit the window size. Also, my favourite project is My Record Collection, so don't miss it.


My Record Collection

Languages and Skills: C++, Qt, APIs, Multithreading, UI, UX
MRC Logo

My Record Collection (MRC) is the first large project I have taken up on my own. Using knowledge I gained in and outside of class, I decided to try to solve a problem I kept facing when picking a record from my collection to listen to: I could never decide which record I was in the mood to hear. I found my collection of around 70 records quite daunting when I wanted to listen to music. Sometimes I would even just give up and do something else. What I wished for was an app where I could see all my records and sort them by genre and by my own personal rating. And if you know much about music: no, Discogs is not quite what I was looking for, although I eventually integrated it into my app in a way.

I had just finished my first term of third year, and in my CS3307: Object-Oriented Design and Analysis course, we had been tasked with coming up with our own idea for a piece of software and building it in C++. We came across the Qt development framework and used it to make an (admittedly not very great) music player with playlists and a queue called Spotishy. So, during the winter reading week, I got to work solving my problem using Qt. I had a rough idea of what I wanted it to do. It should be able to search for and find any record using an API (I used last.fm). It should be able to save a record from last.fm locally. The user should be able to give these local records a rating out of 10 and any tags that they want. Finally, the user should be able to sort the collection by those ratings and tags.
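If you're curious what that last.fm lookup looks like, here is a minimal Python sketch of the idea (the app itself does this in C++ with Qt's networking classes). The album.search method and endpoint come from last.fm's public API; the API key is a placeholder and the response parsing follows the documented JSON shape as I understand it.

```python
# Minimal sketch of a last.fm album search. MRC does the equivalent in C++/Qt;
# this is just to show the shape of the request. "YOUR_API_KEY" is a placeholder.
import requests

def search_albums(query, api_key, limit=10):
    """Return (artist, album) pairs matching `query` from last.fm."""
    response = requests.get(
        "https://ws.audioscrobbler.com/2.0/",
        params={
            "method": "album.search",
            "album": query,
            "api_key": api_key,
            "format": "json",
            "limit": limit,
        },
        timeout=10,
    )
    response.raise_for_status()
    matches = response.json()["results"]["albummatches"]["album"]
    return [(album["artist"], album["name"]) for album in matches]

if __name__ == "__main__":
    for artist, name in search_albums("In Rainbows", "YOUR_API_KEY"):
        print(f"{artist} - {name}")
```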

Photo of the first version of MRC

MRC 1.0.0

This is what I ended up with for my first (mostly) working version of the app. It covered the basics of what I wanted it to do: adding records, tags, and ratings. I found myself actually wanting to use the app to pick what to listen to, and I even started listening to my records more. I considered it a success.

Over the course of about half a year, I continued to make incremental updates to the app, acting on feedback from friends who saw and used it and adding things I wanted it to do. Below are the features I added with each minor version increment.

v1.1.0

  • Suggested tags from genres on Wikipedia
  • Themes (dark and light)
  • Record counter

v1.2.0

  • Import Discogs collection (multi-threaded)
  • Saved preferences
  • Checkmarks in tag menus
  • Delete all records and tags
  • Reset filters
  • Pick a record for me
  • About and contact info

v1.3.0

  • Edit record attributes
  • Create a custom record
  • Progress bar for Discogs import
  • Tag count in filter list
  • Export data
  • Filter by rating

v1.4.0

  • Added "Released" and "Date Added" attributes
  • Resizable window
  • Hide columns from record table

After v1.4.0, I felt like I was mostly "done", so I spent some time figuring out how to compile the app on a Mac so I could give my Mac-using friends a version they could use. I also made some minor adjustments here and there that didn't feel major: performance improvements through threading and grouping elements, a loading animation for album covers, a less confusing "sort by" dropdown, and, finally, a few bug fixes.

Photo of the most recent version of MRC

MRC 1.4.5

MRC 1.4.5 is where I am currently at with the app. I really like it, I use it all the time, and I always keep my collection up to date. Check out this version here to download and try it out. Just download the zipped "Source code" and run the precompiled app in the "My Record Collection-xxx" folder that matches your OS. To use the app, all you need is the folder matching your OS.


I don't currently have many other plans for it. I've thought about adding some kind of data analysis/graphing feature, something Spotify Wrapped-esque, to show a user's collection in another way; I especially like being able to see how many records of each genre I have, as I find it interesting. I also think it would be really cool if the app were an online service with user data stored in the cloud, so you could sign in from any device and it would sync across devices, but that's for v2.0.0 if it ever comes to fruition.

Part of my record collection
The other part of my record collection

My Collection

As a bonus, here is my physical collection. On the left are the records I listen to most and, on the right, the least.


Time Series Prediction with Recurrent Neural Networks

Languages and Skills: Python, Artificial Neural Networks, PyTorch, Time Series Prediction, Data Analysis

Full report pdf

With this project, I set out to create an AI model to forecast attendance at the campus gym. Near the end of my third year, I started regularly going to the gym on campus. I eventually learned that the staff post live attendance updates every 30 minutes on Twitter. After initially finding it difficult to tell whether 80 people in the weight room meant busy or not (I had no prior frame of reference), I thought I would write a script to scrape as much of their tweet data as possible and then graph it. With the Python package I was using, I could scrape the most recent ~800 tweets. Running this once a month for 8 months, I accumulated over 5000 tuples from May 2nd, 2024 onward.
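To give a sense of the data-collection step, here is a rough Python sketch of turning scraped tweets into (timestamp, attendance) tuples. The tweet wording below is made up for illustration; the real Rec Centre tweets are phrased differently, but the idea of pulling the weight room number out with a regex is the same.

```python
# Rough sketch: turn scraped tweets into (timestamp, weight room count) tuples.
# The tweet text here is hypothetical; the real tweets use different wording.
import re
from datetime import datetime

PATTERN = re.compile(r"weight\s*room\D*(\d+)", re.IGNORECASE)

def parse_tweets(tweets):
    """tweets: iterable of (ISO timestamp string, tweet text) pairs."""
    rows = []
    for timestamp_str, text in tweets:
        match = PATTERN.search(text)
        if match:
            rows.append((datetime.fromisoformat(timestamp_str), int(match.group(1))))
    return rows

if __name__ == "__main__":
    sample = [
        ("2024-05-02T17:30:00", "Weight Room: 80, Cardio Mezzanine: 25"),
        ("2024-05-02T18:00:00", "Weight Room: 95, Cardio Mezzanine: 30"),
    ]
    print(parse_tweets(sample))
```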

Rec centre usage over time of day by the day of the week

Usage by Day of the Week

I referred to this as my day-of-the-week average model because it took every data point for each day of the week and hour and simply averaged them together. This is the data I was most interested in initially, as it helped me pick the best day and time to go to the gym. Looking back, I probably could have just googled "what day is the gym least busy?" but that's beside the point. I ended up picking early afternoon on Sundays. I also had charts for each month separately so I could see how different months were more or less busy.
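That baseline comes down to a few lines of pandas. Here is a rough sketch, assuming the scraped data lives in a DataFrame with a datetime column "time" and an integer column "count" (my actual column names differ):

```python
# Day-of-the-week average baseline: group every reading by (day of week, hour)
# and average. `df` is assumed to have a datetime "time" column and an integer
# "count" column holding the weight room attendance.
import pandas as pd

def day_of_week_average(df):
    df = df.copy()
    df["dow"] = df["time"].dt.dayofweek    # 0 = Monday ... 6 = Sunday
    df["hour"] = df["time"].dt.hour
    # One average per (day of week, hour) cell, e.g. "Sundays at 2 p.m."
    return df.groupby(["dow", "hour"])["count"].mean().unstack("hour")

if __name__ == "__main__":
    times = pd.date_range("2024-05-02 06:00", periods=200, freq="30min")
    toy = pd.DataFrame({"time": times, "count": range(200)})
    print(day_of_week_average(toy).round(1))
```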

I quickly realized I wanted to do more with the data I was accumulating. So, for my AI 2 final project, I challenged myself to learn about recurrent neural networks and create a long short-term memory (LSTM) model that could perform time series prediction and forecast how many people there would be at the gym on any given day. It would do this by taking the previous five hours of attendance readings as input and then predicting each hour's attendance for the rest of the day.
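The forecasting loop itself is straightforward. Here is a sketch of the rolling forecast with a naive stand-in predictor in place of the trained LSTM, just to show how each prediction gets fed back in as the newest "reading":

```python
# Rolling forecast sketch: take the last five hourly readings, predict the next
# hour, append the prediction to the window, and repeat until closing.
# `predict_next_hour` is a naive stand-in here; in the real project it is the
# trained LSTM and the window carries more features than (hour, attendance).
def forecast_rest_of_day(predict_next_hour, window, start_hour, close_hour=23):
    """window: the five most recent (hour, attendance) readings, oldest first."""
    window = list(window)
    forecast = []
    for hour in range(start_hour, close_hour + 1):
        prediction = predict_next_hour(window)
        forecast.append((hour, prediction))
        # Slide the window forward: the prediction becomes the newest reading
        window = window[1:] + [(hour, prediction)]
    return forecast

if __name__ == "__main__":
    naive = lambda window: window[-1][1]   # just repeat the last attendance value
    recent = [(6, 20), (7, 35), (8, 50), (9, 60), (10, 70)]
    print(forecast_rest_of_day(naive, recent, start_hour=11))
```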

During my research and development, I discovered a previous student's attempt at solving my problem, which they referred to as GyMBRo, AKA Gym Monitoring By Robot. Clever name. Every day, this bot would post a morning forecast graph on Twitter and, throughout the day, fetch tweets from the Rec Centre and plot those points on the graph. The issue I had with this implementation is that it did not adjust its prediction throughout the day, say if the gym were abnormally busy, which I planned to have mine do by feeding in the five most recent readings each time a forecast is made. GyMBRo did, however, give me a good comparison for my forecasts and, when I found its source code, an additional 7 years of attendance data (50 000 tuples) to help with model training. Because good LSTM models are known to be very dependent on training with lots of data, I knew this was a big win. Thanks Demetri!

To keep things concise here on my site: I engineered my labeled tuples to consist of 5 features: the month (1-12), the week (0-52), the day (1-31), the hour (0-23), and the day of the week (0-6). The label was the number of people in the weight room. This label would also be a feature when fed back into the model to get the next prediction. I also added ~1000 tuples to my dataset with labels of '0' for when the gym was closed. After feature engineering, I took 80% of the days and saved them for training, 10% for validation, and the remaining 10% for testing. I then used PyTorch to train my LSTM model. I trained 100 epochs for each different variation of hyperparameters and saved the checkpoint with the best validation MSE. Then I chose the hyperparameter combination with the best test MSE. I ended up creating a model with a test MSE of 541.6. This is noticeably better than my simple day-of-the-week average model, which had an MSE of 752.2 on the same test dataset. For more in-depth background information, methodology, results, and conclusions of my research, find my full report here.
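Here is a condensed PyTorch sketch of that setup: building 5-step windows, defining the LSTM, and keeping the checkpoint with the best validation MSE. The hyperparameter values are illustrative, the windowing ignores day boundaries, and the toy data at the bottom is random; it is meant to show the structure, not reproduce my actual training code.

```python
# Condensed training sketch: 5-step windows of
# (month, week, day, hour, day of week, attendance), an LSTM regressor, and
# checkpointing on the best validation MSE. Values are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

WINDOW = 5  # hours of history fed to the model

def make_windows(rows):
    """rows: one day's readings in chronological order, each a
    (month, week, day, hour, day_of_week, attendance) tuple."""
    xs, ys = [], []
    for i in range(len(rows) - WINDOW):
        xs.append(rows[i:i + WINDOW])
        ys.append(rows[i + WINDOW][-1])    # label: the next hour's attendance
    return xs, ys

class AttendanceLSTM(nn.Module):
    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(6, hidden_size, num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, 5, 6)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

def train(model, train_dl, val_dl, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    best_val, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_mse = sum(loss_fn(model(x), y).item() for x, y in val_dl) / len(val_dl)
        if val_mse < best_val:             # keep the best-validation checkpoint
            best_val = val_mse
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return best_val

if __name__ == "__main__":
    import random
    day = [(5, 18, 2, h, 3, random.randint(10, 120)) for h in range(6, 24)]
    xs, ys = make_windows(day)
    ds = TensorDataset(torch.tensor(xs, dtype=torch.float32),
                       torch.tensor(ys, dtype=torch.float32))
    dl = DataLoader(ds, batch_size=4, shuffle=True)
    print("toy validation MSE:", train(AttendanceLSTM(), dl, dl, epochs=2))
```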

Hyperparameter comparison chart

Hyperparameter Tuning

This chart shows my process of finding the right hyperparameters for training. The column in light green shows the best combination I found. Each other column has one grey value that differs from the best combination. The validation and test mean squared error for each combination can be seen at the bottom, colour coded from green (best) to red (worst). The test row also shows root mean squared error.
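In code, that one-at-a-time comparison boils down to something like the sketch below. The hyperparameter names and values here are placeholders, and the evaluation function is a stand-in for a full train-and-evaluate run.

```python
# One-at-a-time hyperparameter comparison: start from a base combination, swap
# in a single alternative value at a time, and record each combination's
# validation and test MSE. Names and values are illustrative placeholders.
base = {"hidden_size": 64, "num_layers": 2, "learning_rate": 1e-3, "batch_size": 32}
alternatives = {
    "hidden_size": [32, 128],
    "num_layers": [1, 3],
    "learning_rate": [1e-2, 1e-4],
    "batch_size": [16, 64],
}

def candidate_configs():
    yield dict(base)                       # the base combination itself
    for name, values in alternatives.items():
        for value in values:
            cfg = dict(base)
            cfg[name] = value              # vary exactly one hyperparameter
            yield cfg

def compare(evaluate):
    """evaluate(cfg) -> (validation MSE, test MSE) after training with cfg."""
    return [(cfg, *evaluate(cfg)) for cfg in candidate_configs()]

if __name__ == "__main__":
    fake = lambda cfg: (cfg["hidden_size"] / 10.0, cfg["num_layers"] * 100.0)
    for cfg, val_mse, test_mse in compare(fake):
        print(cfg, round(val_mse, 1), round(test_mse, 1))
```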


After concluding my research, I had learned a lot about what it takes to create an AI model, the steps needed to format training and testing data, and how to communicate my findings. Ultimately, I considered my model a success. It performed better than my previous attempt of simply averaging previous days' data, and I often found it to perform better than the GyMBRo model. I did not find the model's forecasts at the start of the day, before any attendance observations have been made, to be very accurate, though. I think the model could be improved by adding more features, specifically for upcoming holidays and open/closed hours. Perhaps combining two models for a forecast would also be beneficial: using a different architecture to make the morning forecast could counteract the morning underestimation problem.

To run my code on your own, find my repo here and run predict.py in the LSTM folder.

Animated Forecast for April 18, 2023

Animated Forecast for April 18, 2023

This is an animation of the many hourly forecasts made for April 18, 2023 throughout the day. The red line shows the prediction made by the model for the remainder of the day based on the 5-hour input sequence in blue. The green line shows the actual attendance reported by Rec Centre staff; in theory, the red prediction line should match it or come very close. Additionally, the MSE for each prediction can be seen at the top. The initial morning performance is noticeably worse than later in the day.

Forecast for March 30, 2023 at 11 a.m.

Poor Forecasts for March 30, 2023

Here you can see three predictions made for March 30, 2023: in black, my model's forecast made at 5 a.m.; in red, my model's forecast made at 11 a.m.; and in orange, the average attendance for a Thursday in March. Green shows the actual attendance observations. This day was abnormally busy, likely due to the end of the term approaching, and the forecasts were poor. The 11 a.m. forecast predicts a busier day overall but converges on a normal day by closing, which was not correct.


Service Registry

Languages and Skills: Node.JS, HTTP, MySQL, HTML, CSS, JS, Cloud Servers, Teamwork

For the class project of CS4471: Requirement Analysis in my fourth year, my group was tasked with creating a microservice-based system that would operate in a cloud environment. We were required to create a service registry that manages service registrations and maintains an inventory of available services, a service consumer (a front-end application) that enables users to discover registered services through a GUI, and multiple services designed as loosely coupled, independently deployable microservices capable of communicating with the registry through HTTP, AMQP, or gRPC. You can view the project description here.

We decided to code our registry with Node.JS, use MySQL for its database, AWS for cloud hosting, and HTTP requests to enable communication between the registry and services. We then had to identify functional, quality, and architecturally significant requirements (ASRs) of the registry and each service to guide our approach in developing the system's architecture. I was assigned to create the architecture diagrams, identify ASRs, and implement the registry and the first two microservices. As part of my implementation, I provided a Node.JS file to help my groupmates easily connect their services to the registry. Find our full final report here and the implementation code here.

Overall system architecture

Overall System Architecture

Our system architecture consisted of three modules: one for the microservices, one for the registry, and one for the registry database. The microservice modules would send heartbeats to the registry every 15 seconds, letting it know the microservice remained available; send register or deregister requests to the registry; and provide a unique function for the user.

For the backend, the registry processed the POST requests made by the microservices and kept the database up to date. If it did not receive a heartbeat from a microservice for 35 seconds, it would mark the microservice as unavailable. For the front end, the registry provided a GUI for users as a webpage, along with another endpoint to fetch the currently registered microservices. The webpage would use this information to update its list every few seconds.

The final module, the database, was implemented with MySQL and stored all necessary information about each registered microservice, such as the ID, name, URL, status, and timestamp of the last heartbeat.
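The protocol itself is small enough to sketch. Below is a rough Python/Flask stand-in for the registry's backend (our real implementation is the Node.JS code linked above), with an in-memory dict in place of the MySQL table; the route names and payload fields are illustrative, not our exact API.

```python
# Rough stand-in for the registry backend, sketched in Python/Flask rather than
# our actual Node.JS code. Services POST /register once, POST /heartbeat every
# 15 seconds, and anything silent for more than 35 seconds is reported as
# unavailable. An in-memory dict stands in for the MySQL table.
import time
from flask import Flask, jsonify, request

app = Flask(__name__)
services = {}          # id -> {"name": ..., "url": ..., "last_heartbeat": ...}
TIMEOUT_SECONDS = 35

@app.post("/register")
def register():
    info = request.get_json()
    services[info["id"]] = {"name": info["name"], "url": info["url"],
                            "last_heartbeat": time.time()}
    return jsonify(status="registered")

@app.post("/heartbeat")
def heartbeat():
    service_id = request.get_json()["id"]
    if service_id in services:
        services[service_id]["last_heartbeat"] = time.time()
        return jsonify(status="ok")
    return jsonify(status="unknown service"), 404

@app.post("/deregister")
def deregister():
    services.pop(request.get_json()["id"], None)
    return jsonify(status="deregistered")

@app.get("/services")
def list_services():
    # The GUI polls this endpoint every few seconds to refresh its list.
    now = time.time()
    return jsonify([
        {"id": sid, "name": s["name"], "url": s["url"],
         "status": "available" if now - s["last_heartbeat"] <= TIMEOUT_SECONDS
                   else "unavailable"}
        for sid, s in services.items()
    ])

if __name__ == "__main__":
    app.run(port=5000)
```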

Context Diagram

Architecture context diagram

Component Diagram

Architecture component-and-connector diagram

A Couple More Diagrams

The first diagram illustrates the context in which the architecture would be deployed.

The second diagram is a component-and-connector diagram, showing how the modules of the architecture interact, which modules provide interfaces, and which require them.

Demo

Finally, here is a quick demo of the implementation of the architecture running locally. Unfortunately, I only have microservices 1 and 2 in my repository; the other 3 are managed by my groupmates.

The registry's console can be seen below its GUI. You can see the registry and services performing as specified, with registration, deregistration, and heartbeats being sent from service to registry accordingly.


Business Website

Languages and Skills: HTML, CSS, JavaScript, Bootstrap

The final project of CS2033: Multimedia & Communication II in my fourth year required us to individually create a responsive, interactive, informative, and attractive website for a fake business of our choice. As you might have guessed by now, I am an avid record collector, so I chose to make a website for a record store.

Some of the requirements for the site were as follows:

  • Single page
  • Enhanced parallax
  • ScrollFire
  • Day specific messages
  • CSS animation
  • CSS transition
  • Tiled background
  • Back to top button
  • At least 4 products
  • Form modification and validation

I learned a lot from this course and its prerequisite, lessons I have applied to create this site showcasing myself and my projects.

Check out the business' site

About section of business website
Products section of business website

GIS Project

Languages and Skills: Java, UI, Git, Atlassian Suite, Unit Testing, Teamwork

For CS2212: Introduction to Software Engineering in my second year, I participated in my first computer science group project. It was my first experience coding on a team, requiring us to develop a shared vision for the codebase and handle version control collaboratively. To stay organized, we used the Atlassian suite of software.

For the project, we needed to create a geographic information system (GIS) covering three buildings on campus. The system needed to include premade informational POIs on the map as well as functionality for user-created POIs. We were required to use Java and were encouraged to use NetBeans for its drag-and-drop UI creator. Additionally, we wrote unit tests for the code using JUnit.

GIS project GUI

The Application

This is how our project appeared when a user selected a user-created POI and chose to edit it. You can see on the right all of the POI's data fields that you could change, and below that, the current weather on campus. Additionally, the right panel (not shown) included options to toggle different layers of POIs and a user favorites list.

In the middle-top section, you can see a list of all existing POIs, and below that is information about the selected POI. On the left, you have the map, which you can scroll around to view the entire floor.
