

Alexander De Furia

SWISS is a data collection web app for teams in the FIRST Robotics Competition (FRC), enabling real-time collection and analysis of robot performance by team members observing matches at a competition.

This has been a passion project of mine and my friend Nick Kerstens for some time now. He did the amazing front-end work, and I designed and implemented the backend. Together we decided what SWISS should do, and we each made it work in our respective domains. This project would not have been possible without him.

The data collected by SWISS is used to determine opposing teams’ strengths and weaknesses and to suggest a course of action when competing with or against any team or set of teams at the competition. In previous years a team would often be forced to record data with pen and paper on a physical form, then later enter that data into a digital medium, usually an Excel sheet. Simplifying this whole process was the genesis of this tool: SWISS skips the transition between physical and digital, enabling the data to be used in real time. Alongside this, it integrates functions from the best scouting spreadsheets found on Chief Delphi to create a robust all-in-one solution.

SWISS Homepage

Initial Design Choices

Initially there were few requirements for the software, partly because we didn’t know what we could actually make the software do with our data. Any scouting system must, at the bare minimum, be able to do the following for any predefined set of possible inputs:

  • Collect user entry quickly and efficiently.
  • Aggregate all data into one centralized table.
  • Provide analysis tools to quantify data.

These requirements are rather simple in concept but prompt a series of further questions about the design of the application:

  • Can it be real time?
  • Should it be real time?
  • Over what medium do we serve the application?
  • Should it support multiple teams?
  • How do we store the data?
  • Is it even centralized?

So on and so forth. As this was our first major software project, there was little to no initial intuition about how these questions could be answered at a technical level. This required researching a variety of technologies as possible answers. We went over each of those questions to expand our list of absolute requirements, which grew to include:

  • Cross-platform support across desktop, Android, and iOS.

The most accessible and widely supported option across these platforms is plain web access, allowing the application to be used from any device with a browser. The alternative would have been developing independent native apps for each platform, using either native development tools or a framework such as React Native. Neither was appealing, as both would require separate development and support in environments we had no experience with.

  • Real time data collection and analysis.

We want our team to be able to use the latest data from match to match, as variables change throughout the competition day. This requires constant connectivity in some form and centralization of our data.

Real Time Analysis at a Glance
  • User validation and login.

Being at a competition with hundreds of other people, we need to ensure the validity of our data and ultimately keep it to ourselves, as ideally it gives us a competitive advantage.

  • Ability to be served locally over a Personal Area Network (PAN).

A Personal Area Network (PAN) unfortunately became a requirement of the software. At competition, restrictions are in place that prevent broadcasting Wi-Fi networks because they interfere with robots on the field, so there is no Wi-Fi access at events. This, alongside cellular data usage limits, prompted an alternative connection. The PAN became the preferred solution, as it provides a layer-3 connection over which HTTP requests can be made to a local server.

Bluetooth Network Stack

Technical Choices

The first major choice we had to make was how to structure our web application. There is a wide variety of frameworks and philosophies implemented across different languages, each with its own particular specialty. The Model-View-Controller approach seemed like a perfect fit for this application, as the whole thing revolves around entering data into models structured around each year’s FRC game and a consistent team and event structure. Alternatives were initially considered, but given the data-driven nature of this application it was straightforward to take this design philosophy and run with it.

The choice eventually came down to three options: designing a custom Node.js application, using the Node.js framework Express, or using the Python-powered web framework Django. We chose Django due to my familiarity with Python development, its simple scalability, and the excellent model control it provides compared to Express. We could well have used Express or created our own solution, but the time and effort would not have been worth the outcome, which ultimately would have been a less feature-rich version of Django, just in JavaScript.

Match Data Input

I had no idea about databases when we started this project and realistically didn’t do enough research compared to the rest of the design phase. This resulted in the patently incorrect decision to use SQLite3 as our DBMS, which severely limited scalability as well as features in both schema design and control. After several months of sporadic development, the executive decision was made, with no regrets, to scrap the existing database for PostgreSQL. Initially the simplicity of SQLite3 was attractive, but it was ultimately far too limiting for our purposes; PostgreSQL, on the other hand, is feature-rich and far more scalable.
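
In Django, the switch is mostly a settings change; a minimal sketch of the `DATABASES` setting after the move (names and credentials here are placeholders, not SWISS’s actual configuration):

```python
# settings.py fragment -- values here are placeholders, not the real SWISS config

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "swiss",          # database name (placeholder)
        "USER": "swiss_app",      # application role (placeholder)
        "PASSWORD": "change-me",  # load from the environment in practice
        "HOST": "localhost",      # or the managed database's hostname
        "PORT": "5432",
    }
}
```

Because the models go through Django’s ORM, none of the application code has to change: the same migrations run against the new engine.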

The most significant challenge we faced was serving everything over a PAN. This means running layer 7 over layer 2 of the OSI model, and there was little to no public documentation on doing it. The best resource I could find was a Stack Overflow answer explaining how to create a new network configuration for the PAN and set up bt-agent and bt-network services that use bluez-tools to provide the access point. This is not a great solution, to be perfectly honest, but it is the best we could come up with given the constraints of competition (no Wi-Fi, portability, use of unknown devices). The Raspberry Pi 4B’s Bluetooth supports roughly seven reliable connections, which is perfect for our purposes. Worst case, we can configure two Raspberry Pis as access points and have one handle SWISS while the other acts simply as a network bridge. The throughput of a Bluetooth connection could prove to be an issue, but that is something we decided to handle later. This is presently the last thing to be fully implemented and tested for the 2022 release.
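
The bt-agent/bt-network setup sketches out as a pair of systemd units plus a network bridge. The unit contents, interface name, and addresses below are illustrative, pieced together from the bluez-tools documentation rather than copied from the SWISS deployment:

```shell
# /etc/systemd/system/bt-agent.service -- auto-accept pairing (illustrative)
# [Service]
# ExecStart=/usr/bin/bt-agent -c NoInputNoOutput

# /etc/systemd/system/bt-network.service -- advertise a NAP on bridge pan0
# [Service]
# ExecStart=/usr/bin/bt-network -s nap pan0

# One-time bridge setup so connected devices get a layer-3 link:
sudo brctl addbr pan0
sudo ip addr add 192.168.50.1/24 dev pan0
sudo ip link set pan0 up
sudo systemctl enable --now bt-agent bt-network
```

With a DHCP server bound to pan0, any paired phone gets an IP address and can reach the local web server over plain HTTP.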

Bluez Protocol Stack

One of the issues I faced while actually developing the application was sorting out what data each model contained and how the models should relate to each other. My knowledge of databases was nonexistent when starting, so there was a lot of on-the-fly learning. The data was eventually broken down into a handful of models:

  • Image, for team robot images.
  • Team, for each team that is tracked.
  • Event, for each physical event.
  • Match, for the actual scouted data that gets input.
  • Pit, for the robot data gathered by asking teams about their robot.
  • Schedule, to keep track of which teams are at each event.

These models form the core functionality of SWISS. Only the game-piece fields of the Match model need to be updated annually; otherwise the models are abstract enough to use year to year without issue. The only model that is not entirely straightforward is Schedule. It both lists the teams at a particular event and tracks the actual schedule of matches at a competition. This is done by adding a placeholder field to the model and creating dummy matches containing every team at a competition according to the FIRST API. (All data about events and teams is taken from the FIRST API, and users can import by team, event, district, or all at once using a simple front-end form.)
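
To make the relationships concrete, here is a schematic sketch of how the core models relate, written with plain dataclasses rather than actual Django models — the field names are illustrative, not SWISS’s exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    number: int   # FRC team number
    name: str

@dataclass
class Event:
    key: str      # event code, as imported from the FIRST API
    name: str

@dataclass
class Match:
    event: Event          # which event this match belongs to
    team: Team            # which robot was scouted
    match_number: int
    # The annually-changing game-piece fields would live here.

@dataclass
class Schedule:
    event: Event
    teams: list = field(default_factory=list)  # teams attending the event

# A schedule entry listing the teams at a made-up event:
ev = Event(key="2022test", name="Example District Event")
sched = Schedule(event=ev, teams=[Team(1234, "Placeholder Robotics")])
```

In the real application these references are ForeignKeys, so the database enforces that every Match points at an existing Team and Event.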

Raw Data From the DB

One of the interesting features that came about late in development was data ownership. It arose once we finalized that this tool would be widely available at swiss-scouting.ca to any interested team; before that decision the intent was for this to be an internal tool, but that was before we realized just how useful SWISS was. Different teams only want access to their own data, to ensure it isn’t tainted and a competitive edge is maintained, and we want to avoid unnecessary data duplication while keeping teams’ data easy to tell apart. This led to ownership of the Image, Match, and Pit models: a field was added recording which team created each piece of data, and the middleware was updated to prevent a user from requesting data they don’t own. We didn’t extend ownership to all models, as the team and event data is not alterable by teams and is the same for everyone. This implementation led to the creation of the TeamMember and TeamSettings models, letting us easily determine a new record’s ownership and a team’s particular settings, such as the event it is currently attending. This system enabled multiple teams to use SWISS simultaneously and independently.
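
The ownership check can be illustrated independently of Django. A minimal sketch of the idea, reduced to plain dictionaries — the names here are hypothetical, and the real implementation lives in SWISS’s middleware and querysets:

```python
# Hypothetical sketch of per-team data ownership filtering.
# In SWISS this is a ForeignKey on the owned models plus a middleware check.

OWNED_MODELS = {"Image", "Match", "Pit"}   # models that carry an owner field
SHARED_MODELS = {"Team", "Event"}          # FIRST API data, same for everyone

def visible_rows(rows, model_name, requesting_team):
    """Return only the rows the requesting team is allowed to see."""
    if model_name in SHARED_MODELS:
        return list(rows)                  # shared data is never filtered
    if model_name in OWNED_MODELS:
        return [r for r in rows if r["owner"] == requesting_team]
    raise ValueError(f"unknown model: {model_name}")

matches = [
    {"owner": 1234, "match": 12, "points": 38},
    {"owner": 5678, "match": 12, "points": 41},
]
print(visible_rows(matches, "Match", 1234))  # only team 1234's row survives
```

Filtering on the server side, before serialization, is what guarantees one team can never see another team’s scouting data even with a handcrafted request.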

Team Settings


Deployment

This was a learning process. There were some significant stumbling blocks, but ultimately all our issues got sorted, leading to a stable, secure, and fast experience for both users and developers. While this is still a work in progress, the first major versions of the deployment system were seriously flawed in different ways yet nonetheless effective. They were changed slowly over time, simply to fit the needs of the moment, and there was no long-term planning put into any of the server-side tools until recently.

You’ll have to forgive my foggy memory of the specifics, as this was several years ago now. The initial version was incredibly awful as a result of the server environment allowed by Namecheap hosting. The only way to spin up a server was to have an app that a stripped-down cPanel would install and manage in, it seemed, the most limiting way possible. This led to the workaround of creating a dummy Django app that served no purpose and using a bash script to clone our git repository and overwrite everything the dummy app generated. Serving the static and media files proved to be a massive issue on this server: it refused to accept configuration that should have worked for both our dev environment and our limited production environment, causing the bash script mentioned above to grow in scope and complexity to the point where it was managing the whole project (seriously flawed). This, along with constant connection issues, prompted the change to DigitalOcean hosting.

DigitalOcean Project Panel

DigitalOcean has been amazing. All the issues we hit on DigitalOcean were a result of my own actions, whereas with Namecheap it was a 50/50 split, which was just unacceptable. This guide has been the best resource I could have asked for, and I highly recommend it to anyone interested in doing anything similar, on DigitalOcean or not.

During development we needed something quick to update that reflected the latest working version of our source code, so we based the whole process on pulling from git in an update script. The script is rather simple, under 10 lines: it pulls changes from git, updates dependencies, makes Django migrations, applies those migrations, and restarts Nginx and Gunicorn. To make this update functionality easily accessible to other developers who like to avoid SSH, I tied it into a simple Flask server that calls the bash script when a GET request is received at a particular subdomain. Is this entirely secure and resilient to malicious actors? Absolutely not, and it is not a good solution for a critical application. With that in mind, however, I did not see us being the subject of a cyber attack during early development, and I am happy to say I was correct.
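
The update script described above looks roughly like this — paths and service names are placeholders, so treat it as a sketch of the flow rather than the exact SWISS script:

```shell
#!/bin/bash
# update.sh -- pull and redeploy the latest working version (illustrative)
set -e                                    # abort on the first failure
cd /srv/FRC-Scouting                      # placeholder project path
git pull origin master                    # fetch the latest source
pip install -r requirements.txt           # update dependencies
python manage.py makemigrations           # generate any new migrations
python manage.py migrate                  # apply them to the database
sudo systemctl restart gunicorn           # reload the app workers
sudo systemctl restart nginx              # reload the reverse proxy
```

The Flask trigger simply shells out to this script on a GET, which is exactly why it should never survive into a hardened production setup.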

So at this point we have an update service, but we haven’t really discussed the stack of technologies used in SWISS. Our fundamental application is of course based on the Django framework, and we use Gunicorn as our Web Server Gateway Interface (WSGI) server to run the Django app, FRC-Scouting. Gunicorn invokes the app and creates an appropriate number of workers to handle requests. We use Nginx as a reverse proxy to direct requests to Gunicorn and to serve static files. As of now our static and media files are held locally, as no machine other than the single server instance needs quick access to them. Lastly, the data of SWISS is handled by a PostgreSQL server hosted by DigitalOcean.
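
That request path — Nginx in front, Gunicorn behind, static files off the local disk — boils down to a short server block. A sketch with placeholder paths, not the deployed configuration:

```nginx
# /etc/nginx/sites-available/swiss (illustrative, placeholder paths)
server {
    listen 80;
    server_name swiss-scouting.ca;

    # Static and media files are served straight off the local disk
    location /static/ { root /srv/FRC-Scouting; }
    location /media/  { root /srv/FRC-Scouting; }

    # Everything else is proxied through to the Gunicorn workers
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```

Keeping static files out of Gunicorn’s hands leaves the Python workers free to do only the dynamic work.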

Technical Future

Ideally, within the next month this whole stack will change from a single server instance, which is simply not scalable, into a Kubernetes cluster allowing simple horizontal scaling. The first steps have already been taken: a Docker image of SWISS has been created, and an automated build workflow for it is already functional. To be perfectly honest, my understanding of the specifics of implementing this is limited, and the idea only seems straightforward, so please do not take the rest of this paragraph as fact. Kubernetes will spin up and manage a dynamic number of Docker containers based on the SWISS image, according to the requests made to the cluster. The cluster will include a load balancer to distribute requests appropriately between nodes, ensuring each request is processed as quickly as possible even if another has stalled. It will also allow downtime-free updates by spinning up nodes with the new version and then spinning down outdated ones.
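
With the same caveat that the specifics are still being learned, the plan above maps onto a standard Kubernetes Deployment plus Service. Everything here is a placeholder — names, image path, replica count:

```yaml
# Illustrative only -- names, image, and replica count are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swiss
spec:
  replicas: 3                      # the horizontal-scaling knob
  selector:
    matchLabels: { app: swiss }
  strategy:
    type: RollingUpdate            # new pods come up before old ones stop
  template:
    metadata:
      labels: { app: swiss }
    spec:
      containers:
        - name: swiss
          image: registry.example.com/swiss:latest   # the automated Docker build
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: swiss
spec:
  type: LoadBalancer               # distributes requests across pods
  selector: { app: swiss }
  ports:
    - port: 80
      targetPort: 8000
```

The RollingUpdate strategy is what provides the downtime-free upgrades described above.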

One of the requirements of this system is a central data storage solution. I already have that for structured data with the PostgreSQL server hosted by DigitalOcean, as it can process many simultaneous requests. The issue is that the dynamic media files handled by SWISS are presently stored locally. This will require a transition to a network-attached storage solution accessible by all the nodes, as well as from a single development instance, so that we are not dealing with multiple environments and testing criteria. This must happen before the transition to a production Kubernetes cluster, as the nodes would rely on that storage. At the moment the best solution seems to be an S3 bucket for all static and media files, but that is not set in stone and requires a bit more exploration of the specifics of uploading files dynamically. In any case the application should not see the files any differently than it does now: the database will hold a reference to a team’s image, and the server can retrieve it, whether from the local disk or an S3 bucket, and send it off to the client — or potentially just send the reference to the client. As I said, the specifics are not nailed down.
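
If the S3 route is taken, the usual Django approach is the django-storages package, which swaps the storage backend without the application code noticing. A sketch of the settings involved — bucket name and region are placeholders, and this is one option under consideration, not a committed decision:

```python
# settings.py fragment -- hypothetical django-storages configuration
INSTALLED_APPS += ["storages"]

# Route Django's default file storage (media uploads) to the bucket;
# application code keeps calling the same storage API as before.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "swiss-media"     # placeholder bucket name
AWS_S3_REGION_NAME = "us-east-1"            # placeholder region
# Credentials should come from the environment, not the settings file.
```

Because the models only store a file reference, either backend satisfies the "database holds a reference, server retrieves the bytes" behaviour described above.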

This part of the article is mostly me setting my plans straight. Beyond what is written here, I don’t think there will be any more major changes to how the application works. Upgrading from year to year in this system will be interesting. If everything works together the way I intend, it should be even easier than the last upgrade from 2020/21 to 2022, since things are more automated now — or it could all break, since things are more automated now… It would have really helped to have some relevant formal training and not be stuck in the middle of a university Computer Science degree.

Future of SWISS

Ideally the current version of this application will serve my own teams at their first events of the 2022 season, happening simultaneously in early March, and then we can look to expand it to other teams in FRC. That will require the scaling Kubernetes allows, so as soon as it is implemented and tested I’ll be trying to get more and more teams using this software. One question I’ve had from people is “Why give this software away for free to other teams?”. It’s free, source code included, because having been in the program myself and benefited immensely from it, I realize none of that would have been possible if developers charged for everything that made the experience better, or if teams kept their new tools private. This is my way of giving back to the FIRST community in my own small way. We might have to charge a small fee to cover web hosting depending on how many people adopt it, but it will always be available free as both source code and an installable .deb file.