I finished a moderately-sized project last week; since I want to blog more I figured I'd talk about it.
I've blogged about my karting hobby before, but anyone halfway quick in public karting knows the biggest factor in how fast you are is the karts themselves: maybe some rando crashed a good kart and made it bad; maybe a bad kart got worked on and made it good; maybe a good kart got worked on and made it bad - you never know until you show up.
Most of the regulars at my local track know each other and we're generally open about our recent experiences with karts - which ones seem faster, which ones seem slower, which ones have developed weird handling characteristics - but we don't run all the karts and we don't always take note of the karts we do run, so some can slip through the cracks.
I was idly chatting last year about how nice it would be if we could pull race data to discern which karts are fast when a fellow racer told me he'd come up with a script to do exactly that. It sounded more than a little clunky - I'm sure the fact that software for karting facilities doesn't exactly follow modern best practices didn't help - but I didn't have to write any code so if it was working I was going to use it.
Moving forward to the middle of this past April, I had more free time on my hands than I could spend searching Twitter for shitposts and I wanted to get some more experience with writing async Python code, so I decided to write my own version, throwing in a couple other things I'd been wanting to try as well.
I had read about 12 factor apps before but never built one from scratch, so I figured I'd start from that paradigm. Since I knew I wouldn't be maintaining this much after I got it stable in production, a standalone db server was out of the question, so I chose SQLite instead.
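The biggest concrete effect the 12 factor paradigm had here was pulling config out of the code - the db path lives in an environment variable. A sketch of what that looks like (the variable name and the schema below are illustrative, not lifted from my repo):

```python
import os
import sqlite3

# 12-factor style config: the database location comes from the environment
# instead of being hardcoded (KART_DB_PATH is an illustrative name).
DB_PATH = os.environ.get("KART_DB_PATH", "karts.db")

def get_connection() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    # SQLite leaves foreign key enforcement off by default
    conn.execute("PRAGMA foreign_keys = ON")
    return conn

with get_connection() as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS races"
        " (id INTEGER PRIMARY KEY, kart INTEGER, best_lap_ms INTEGER)"
    )
```

Since the stdlib `sqlite3` module covers this, there's no third-party ORM to drag in either.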
After creating a schema I could live with I started working on tooling to get an idea of what code was necessary for this project; at the time, I thought my core loop would be similar to the script I had been told about - at some point each day I would get all the races of certain people whose history I wanted that weren't already in my database. Thankfully, after finishing the tooling I looked into how the track made its realtime race data available and realized I could ingest that directly - meaning I was essentially guaranteed to capture data from every race and not just the ones related to people I'd already seen.
I lost my momentum for a while after getting the backend running - I was dreading having to write the tests, plus PyCon was coming up and I focused on preparing for my first big trip since the world shut down; it took a couple weeks after I returned before I could force myself into working on the frontend (a very simple Flask app and my introduction to Bootstrap). Once I got back into the software engineer groove writing the unit tests was still a slog, but one I could deal with.
What I could not deal with, however, was mypy. When I'm working on personal projects I don't particularly mind using tools with a more...theoretical benefit if I think they can help me become a better software engineer, and mypy is the prime example of such a tool. "It's Python with some semblance of type safety, what could go wrong?" I said to myself as I installed the tool with pip. "You typed another project in the past and it wasn't that terrible, was it?" I said to myself as I ran the tool on my new repo for the first time. After two calendar days - which included multiple type cheats, moving from inline annotations to type stubs and back, and a death march from ~11AM to 5AM - I managed to make mypy shut up. I wish I could say the mypy-influenced changes were improvements, but their only benefit was to quiet the tool. Nevertheless, I managed to defeat the infernal thing and move on.
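To give a sense of what I mean by a type cheat (this snippet is invented for illustration, not lifted from the repo): once data comes out of a JSON blob, mypy has no idea what's inside, and `cast` makes it stop complaining without proving anything:

```python
from typing import Any, cast

def laps_from_payload(payload: dict[str, Any]) -> list[int]:
    # cast() tells mypy to trust us; nothing is checked at runtime,
    # which is exactly why it's a cheat rather than a fix.
    return cast(list[int], payload["laps"])

laps = laps_from_payload({"laps": [24310, 24155, 24402]})
```

Sprinkle enough of these around and mypy goes quiet, but the code is no safer than it was before.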
Once everything Worked On My Machine (TM), all that remained was the "simple" yet tedious stuff I only do because people pay me to do it - setting the GitHub repo up correctly, configuring code coverage and figuring out how I was going to get what I'd written to run in production. Since Python packaging has been in a bit of flux for the past few years, I like to attend at least one packaging talk every time I attend PyCon to find out the state of the art; one talk mentioned the Scikit-HEP guidelines, which seemed neat enough for me to steal for this project's layout. I normally use Codecov for code coverage of my projects but called an audible this time to give Code Climate a shot. After getting Code Climate running I set up my preferred collection of style and linting tools in a pre-commit hook and created the CI pipeline. Since I was feeling frisky, I also created a test CD pipeline that publishes to TestPyPI.
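My exact hook list isn't important, but for anyone who hasn't used pre-commit, the `.pre-commit-config.yaml` for this kind of setup looks roughly like this (black and flake8 are just example choices, not necessarily mine):

```yaml
# .pre-commit-config.yaml - example hook set; swap in your preferred tools
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```

Run `pre-commit install` once and every commit gets checked before it lands, which keeps the CI pipeline from being the first place style problems show up.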
After I had all my automation in place it was time to determine how I was going to run this. I had been looking at Fly for a while but never had any reason to use it before, so when I pulled the trigger on this project I decided I would also give Fly a shot. Because I can never make things easy for myself, deploying on Fly meant I would have to learn Docker and figure out how to write a Dockerfile, but I had been meaning to get my feet wet with Docker anyway so I was in a multiple-birds-one-stone situation. Creating a working Dockerfile took far less time than getting my CI pipeline working, so after creating my Fly account, I created a small volume to store the database, hit deploy and - my app immediately fell over.
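For the curious, a minimal Dockerfile for a small Flask app on Fly tends to look something like this (the server, the module name and the port here are all assumptions for the sake of the example, not my actual file):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer caches across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Fly proxies external traffic to whatever port the app listens on
EXPOSE 8080
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:app"]
```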
But why? It worked locally in Docker, whose whole purpose is to avoid Works On My Machine (TM) shenanigans, so how could there be issues when it was deployed to production? I asked in the community forums and got the expected amount of help you normally get in community forums (zero), so I continued to dig into the problem on my own. I never managed to find the setting I needed to change to make everything work the way I wanted, but I found a workaround that was sufficient for my needs and my project was live, pulling and displaying data just like it should.
I pushed my last update early on a Tuesday. When I woke up that morning to check if the app had fallen over in my sleep, the logs were full of errors from failing to connect to the live data source. This had never happened in the weeks I had been building the app so I didn't know what to think, but I left it running to see if the issue would solve itself. The issue persisted into Wednesday, so I decided to take a trip up to the track to see what was going on. That's when I heard the news - the company that owns the track had decided to move away from their previous software provider, breaking my app. Like the movie, the passion of the code is derived from pathos, meaning "suffering".
As I write this post the replacement system is wet garbage, but will hopefully improve quickly - with luck we will be able to get timesheets of our sessions this week, and if live race timing isn't available before the next league race the regulars may riot. I hope when live timing comes back my app can access that data, letting me finish my last bit of beta testing and work on my plans for observability, but only time will tell.
|||The biggest influence the 12 factor paradigm had on my app was making the db path an environment variable, but I guess you could include starting with admin tooling as well|
|||One additional bonus from using SQLite was that I didn't have to pull in a third-party ORM, cutting down on direct and transitive dependencies|
|||tl;dr don't use Code Climate: aside from the fact that it can't handle merging results from multiple test runs, it also can't handle Python packages installed in non-editable mode|
|||Which thought my Privacy card was a prepaid card and wouldn't let me use it for billing; I had to buy service credits I will never use since I want to keep this app in Fly's free tier|