Monday, April 15, 2024

Returning my Framework 13: A foray into Linux

I have been a Mac user for eight years and for the most part I have been using the same MacBook Pro since 2016. It hasn't been a perfect computer: the keyboard had to be replaced and the keys used to stick (the '-' key is still a little sticky), but it has lasted all this time without much fuss. I supplemented it with a Mac Mini last year because I was a bit frustrated with how much it had slowed down, but I still use my MacBook Pro because the Mac Mini is decidedly not portable.

All this is to say that I am a pretty happy Mac user. But as it is time to replace this ageing laptop, I thought I'd give something new a try. I didn't want to touch Windows; I don't judge those who do, it is just not my cup of tea. So the logical conclusion was Linux. I use Linux every day, albeit only on a cloud VM, so I thought it couldn't be too bad. I have tried using Linux as my primary OS before, just before I switched to Mac, and it was a frustrating experience. The Asus laptop I had at the time had a high-DPI screen (above 1080p) which caused scaling issues (keep a note of this), I missed double-clicking an icon to install something, and it never felt particularly reliable. But I thought surely things have improved, not necessarily in terms of Linux distributions, but in my own technical ability: I should now be able to comfortably deal with installing from source, updating packages with apt/dnf and so on.

I decided on the Framework 13 because it seems like a really fantastic idea. Being able to change ports is nice, but the real killer feature is being able to upgrade the RAM, disk and CPU, all in a professional, MacBook-esque machine. Some other shoutouts: the 3:2 aspect ratio screen is brilliant, making the 13" display feel much bigger because webpages and IDEs utilise the space better; the battery life is very solid; and the fingerprint reader was remarkably easy to set up in Fedora.

Fedora is also incredibly usable. It uses Gnome, which has some odd default behaviour but is easy to customise with Gnome extensions. It is also nice to use the OS that the GNU tools and most other software tools are made for. I will also admit Apple are not perfect: it was especially sad to see how impossible it is to use any Apple cloud features on Linux (no iCloud Drive, Messages or Photos). It really hit home how fenced off the ecosystem is.

However, the honeymoon phase didn't last very long. The first major problem I had was that the screen would occasionally go completely white after opening the lid from sleep. This is a documented problem and the recommended fix was easy and just worked. I then started to notice that a lot of the apps I was installing looked terrible, with fuzzy text. Unfortunately there's no real workaround for this: the conclusion seems to be that these apps (normally Electron-based) are using an older Wayland version so don't support high DPI (yes, this is still an issue eight years later). These problems were annoying and made me a little homesick for the polished macOS experience, but they were liveable. The straw that broke the proverbial camel's back was when the fan started whirring, the CPU shot up to 100% and the laptop froze completely. I had to hold down the power button to stop it. At this point I realized that Linux as a daily driver may never be for me.

I am certain there will be people who exclaim that I just needed to apply this one very simple fix, or ask "why did you use Fedora? Everyone's using NixOS", but even if those suggestions did fix these problems, I'm sure new ones would emerge. Call me an Apple shill who's scared of the real world, but I like living in the safe Apple orchard (could Apple be a callback to the Garden of Eden?).

Saturday, September 9, 2023

Why Go is actually easier than Python

A gopher

After a number of years developing in Python, I have seen the light and fallen for Go. This isn't the first time I have tried out Go, but it is the first time I have felt it to be easier, cleaner and overall more sane than Python. I do admit part of this is the euphoria of seeing something new and shiny, and of not yet having used it enough to see its flaws, but I feel I have come to discover the beauty of Go for myself.

I think this is a really fundamental point, because I have read about the wonders of Go a number of times and nodded along. But until you actually use something in anger, you won't really get it. Nothing here is going to be truly original, but hopefully it gives a convergent perspective to the Go lovers, from someone who had no intention of loving Go.

As I mentioned, this was not my first foray into Go. I have tried learning it a couple of times and even spent a couple of weeks in a Go codebase, but my previous attempts were frustrating because I jumped straight to the novel things: Goroutines and channels (which are awesome, but I don't recommend starting there). It was only when porting an existing Python program over to Go, playing around with the more familiar, that I finally worked out for myself what makes Go so wonderful: stacked simplicity. A few key rules and ideas add up to make a comprehensive grammar. Go's strength comes as much from what it leaves out as from what it includes.

Error handling

Error handling in Go is contentious. Littering if err != nil everywhere feels like a ton of boilerplate that other languages avoid with exceptions. But it has two very important implications:

  1. It is clear where errors occur. This is a huge improvement over Python (or JavaScript or any other exception-based language), where you have no clue what exceptions a function might throw. It makes you think about errors up-front, rather than debugging them later.
  2. No default behaviour. Crashing as the default behaviour on error has to be up there with NULL among the billion-dollar mistakes. Of course, failing fast is better than failing silently, but how many times have you seen a program crash because of a simple mistake that could easily have been caught in development? Go simply doesn't let you ignore errors. If you have a function func foo() (int, error) and you want to use the return value, you are forced to handle the error. Sure, you could just do x, _ := foo(), but that _ is a clear indicator that you are ignoring the error. The compiler erroring on unused variables is another simple rule that enforces this (see the sketch below).
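
To make this concrete, here is a minimal sketch of handling foo (foo itself is just a stand-in):

package main

import (
    "errors"
    "fmt"
)

// foo is a stand-in for the function above: it returns a value and an error.
func foo() (int, error) {
    return 0, errors.New("something went wrong")
}

func main() {
    // x := foo() would not compile (multiple-value foo in single-value context),
    // and leaving err unused trips the compiler's unused variable error.
    x, err := foo()
    if err != nil {
        fmt.Println("foo failed:", err)
        return
    }
    fmt.Println(x)
}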

Note: the compiler doesn't error on an ignored return value, so func subroutine() error doesn't cause a complaint when called as a bare subroutine(). This has been a known issue since 2017, but it is actually more complicated than it seems on first thought (e.g. having to write _, _ = fmt.Println("Hello world") is not exactly pretty to a newcomer).
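
A quick sketch of that gotcha:

package main

import "fmt"

// subroutine mirrors the example above: it returns only an error.
func subroutine() error {
    return fmt.Errorf("oops")
}

func main() {
    // This compiles without complaint: the returned error is silently dropped.
    subroutine()
}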

I also love how simple the error type is: it is just an interface with a single method, Error() string. No magic here!
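
For instance, this is all it takes to define your own error type (a made-up example):

package main

import "fmt"

// NotFoundError is a sketch of a custom error type.
type NotFoundError struct {
    Name string
}

// Implementing Error() string alone satisfies the error interface.
func (e *NotFoundError) Error() string {
    return fmt.Sprintf("%q not found", e.Name)
}

func main() {
    var err error = &NotFoundError{Name: "config.toml"}
    fmt.Println(err) // prints: "config.toml" not found
}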

No decorators

A lot of Python web frameworks use decorators for routing. But you do not need decorators when you have anonymous functions. The code looks almost the same, but the lack of the @ makes it a lot clearer what is going on.

Compare:

@app.get("/post")
def post():
...

and

r.GET("/post", func(c *gin.Context) {
...
})

Not only do you lose the @, you also do not need to write out a pointless function name.

No classes

This can at first look like a merely cosmetic difference, but structs are really not the same as classes. You do not have inheritance, just interfaces. Again, the language provides you with just enough to avoid spaghetti code. Interfaces are such an elegant way to provide static duck typing, and the type system as a whole feels like an aid rather than ceremony.
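
A little made-up sketch of static duck typing in action:

package main

import "fmt"

// Notifier is satisfied by any type with the right method;
// there is no "implements" keyword anywhere.
type Notifier interface {
    Notify(msg string)
}

type EmailNotifier struct{ Address string }

func (e EmailNotifier) Notify(msg string) {
    fmt.Printf("emailing %s: %s\n", e.Address, msg)
}

type SMSNotifier struct{ Number string }

func (s SMSNotifier) Notify(msg string) {
    fmt.Printf("texting %s: %s\n", s.Number, msg)
}

func main() {
    notifiers := []Notifier{
        EmailNotifier{Address: "me@example.com"},
        SMSNotifier{Number: "07123456789"},
    }
    for _, n := range notifiers {
        n.Notify("build finished")
    }
}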

Summary

Go is not perfect of course, and I have noticed through browsing Go codebases that there is often a lot of repetition. For example, you often see parameter structs passed into another struct, so you end up with duplicated fields, as in the sketch below.
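
A made-up illustration of the kind of duplication I mean:

package config

// ServerConfig is the parameter struct...
type ServerConfig struct {
    Host string
    Port int
}

// ...and Server ends up repeating the same fields.
type Server struct {
    Host string
    Port int
}

func NewServer(cfg ServerConfig) *Server {
    return &Server{Host: cfg.Host, Port: cfg.Port}
}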

So what about Rust? Go is often compared to Rust because of their relatively short histories (2009 and 2015 respectively) and fairly recent usage explosions, but I personally don't find the comparison particularly useful. They are very different languages with completely different goals and philosophies. Put simply, Rust is not trying to be a simple language and Go is not looking to replace C.

Ultimately, for any new project for which you might consider Java, Python or JavaScript and which does not have a specific language requirement, I honestly think Go would be my first choice.

Friday, July 28, 2023

Does Azure DevOps have a Future?

It's not unknown for Microsoft to have duplicate products. Just think of Windows 8, Windows RT, Windows Phone, Teams and Skype, Project and Planner. The list goes on. So the fact that Microsoft has two rather similar code storage and management tools in GitHub and Azure DevOps shouldn't surprise us. But how long can this charade last?


I was curious to find out, as it looks like a project I am working on might be moving to Azure DevOps. I did some research and couldn't find much beyond speculation. However, one juicy source I did find is Episode 321 of The Azure Podcast, in which they interview Sasha Rosenbaum, a Senior PM from the GitHub team. In the podcast she says (around 10:30-12:00):

"We can’t effectively run two products and have internal competition between two things so we are going to move towards having one in the end.

“GitHub is the future, [it is] much better positioned to accomplish certain things

“If you are in Azure DevOps now, you probably have five years (emphasis mine) that you can safely continue working in Azure DevOps.

If you’re starting out, check out GitHub first because that’s where we’re going to make investments mostly."

This episode is from March 2020 so that five years is now less than two.

This might be terrifying news for any of you currently working in Azure DevOps, and whilst I don't imagine the transition will be painless, I think it absolutely will, and should, happen.

But what does the word of one PM count for? After all, she no longer even works at Microsoft. Well, for one, looking at her LinkedIn biography, she was actually originally on the Microsoft Azure DevOps team, so the fact that she moved onto the GitHub team may tell us something. Furthermore, the sentiment that this product will slowly be phased out is widespread in the Azure DevOps community. Try looking up "GitHub Actions to Azure DevOps pipeline" and you'll see which way the wind is blowing.

And even setting the evidence aside, the truth is that if Microsoft want to streamline their offering, which I think is safe to assume, the choice comes down to GitHub or Azure DevOps, and Microsoft would have a much harder time trying to carry people over to Azure DevOps. GitHub Enterprise is the future of Azure DevOps.

Thursday, July 27, 2023

Automating InfluxDB and Telegraf with Docker Compose

Automating is never as easy as you imagine.

InfluxDB is the most popular (as of July 2023) time-series database. It is often used alongside Telegraf, a metrics collection agent akin to Fluent Bit. Getting the two to work together with Docker Compose is fairly easy, but as soon as you want to automate the whole process, it gets painful.

Enter the world of shell arrays.

In my four years of programming, I'd never come across arrays in shell. How lucky I was. If you thought Perl syntax was weird, behold ${!arr[@]}. So why did I stumble across this nightmare? InfluxDB has a concept of buckets, a little bit like namespaces/schemas in PostgreSQL. Not quite another database, but more separated than tables. I wanted to have a number of buckets for the different metrics Telegraf is collecting. The outputs.conf looks something like this (two example buckets, applog and metrics):
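
# telegraf outputs.conf (sketch; the org and token values are illustrative)
[[outputs.influxdb_v2]]
  urls = ["http://$INFLUX_HOST:8086"]
  token = "$INFLUX_TOKEN"
  organization = "demo"
  namepass = ["applog"]
  bucket = "applog"

[[outputs.influxdb_v2]]
  urls = ["http://$INFLUX_HOST:8086"]
  token = "$INFLUX_TOKEN"
  organization = "demo"
  namepass = ["cpu", "disk", "mem", "system"]
  bucket = "metrics"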

If you're not familiar with Telegraf, the above snippet is a conf file that will take measurements named applog and forward them into the applog bucket, and take cpu, disk, mem and system into metrics. In order for this to work, the buckets need to exist in InfluxDB. This is where the fun begins. Like many Docker images, the Influx image provides some useful environment variables. One is DOCKER_INFLUXDB_INIT_BUCKET, which lets you specify a bucket to be created on startup. Unfortunately it only lets you specify one, but do not fear: you can also mount a startup script (./scripts/influx:/docker-entrypoint-initdb.d) so you can create your buckets programmatically:

#!/bin/bash
# scripts/influx/init.sh
set -e
echo Creating bucket: applog
influx bucket create -n applog
echo Creating bucket: metrics
influx bucket create -n metrics


But that's not very DRY. So what can we use? Ah, of course: a loop and an array:


#!/bin/bash
# scripts/influx/init.sh
set -e

BUCKETS=(
    'applog'
    'metrics'
)
for i in "${!BUCKETS[@]}"
do
    echo "$i" Creating bucket: "${BUCKETS[$i]}"
    influx bucket create -n "${BUCKETS[$i]}"
done


So much DRYer! Now I can just add buckets to the array. No more copy-and-pasta. But wait a second. Is this truly DRY? If I add another entry to outputs.conf I also have to remember to update init.sh.


[[outputs.influxdb_v2]]
  urls = ["http://$INFLUX_HOST:8086"]
  token = "$INFLUX_TOKEN"
  organization = "demo"
  namepass = ["measurements"]
  bucket = "measurements"


This won't cut it. Thankfully, getting the bucket names out of outputs.conf isn't too bad. Something along these lines does the trick (the grep pattern and file path here are just my setup):
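
# compose_init.sh
# Resolve the conf file relative to this script rather than the caller
SCRIPT_DIR=$(dirname "${BASH_SOURCE[0]}")
# Pull every bucket = "..." value out of outputs.conf and export them as a
# single space-separated string (xargs trims and joins the lines)
export BUCKETS=$(grep -oP 'bucket = "\K[^"]+' "$SCRIPT_DIR/telegraf/outputs.conf" | xargs)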

We first tell the script where the config file is (by default bash file locations are relative to where the script was called, not the script itself). We then have to export this so that it can be accessed in our Docker container. Since we're exporting a variable, we need to source (not run) this script i.e. . compose_init.sh.

Another fun aside here: you probably habitually add set -e to the top of your shell scripts. Don't do that here, because it will kill your terminal on any failure (since we sourced compose_init.sh).

We'll now change init.sh to use the BUCKETS environment variable. But remember, BUCKETS is no longer an array (it's just a string, as that's what grep gives us), so we need to do some really obvious and intuitive stuff to split it back up (sketched here with mapfile):
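
#!/bin/bash
# scripts/influx/init.sh
set -e

# BUCKETS arrives as one space-separated string, so split it back
# into a proper array before looping
mapfile -t buckets < <(tr ' ' '\n' <<< "$BUCKETS")
for bucket in "${buckets[@]}"
do
    echo "Creating bucket: $bucket"
    influx bucket create -n "$bucket"
done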

See ShellCheck: SC2207 if you want to understand mapfile.

Why didn't you just keep BUCKETS as an array in compose_init.sh? Simple answer: environment variables are plain strings, so a bash array can't survive the trip into the Docker container.

And pass BUCKETS into the Influx service in docker-compose.yml, which looks something like this (abridged to the relevant bits):
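
# docker-compose.yml (influxdb service only; the image tag is illustrative)
services:
  influxdb:
    image: influxdb:2.7
    environment:
      - BUCKETS=${BUCKETS}
    volumes:
      - ./scripts/influx:/docker-entrypoint-initdb.d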

And finally, in order to run this: 

. compose_init.sh && docker compose up -d

That's it. Automated bucket creation for Telegraf and InfluxDB with Docker Compose.

See the full code gist here.