Stuff I Can’t Afford

himekanvas / Himeka
Aw, man… Why did HIMEKA have to come out with a new album right now?

himekanvas [w/ DVD, Limited Edition]

Adding it all up, it comes to nearly $60 with shipping to where I am in Canada. Maybe I can convince someone to get it for me as a Christmas present? Who knows 🙂

(Why yes, I am blatantly including an affiliate link for myself in this post. Why not?)

Update: I decided that I could actually afford it after all. And the Limited Edition was still available. It’s in the mail now…

Expensive SD cards: Worth it.

So, I just bought a cheap 16GB SD card today. My original plan was to use it as a replacement for the failed flash drive in an old EeePC laptop. (As it turns out, the SD card reader in said netbook is broken, so that plan failed.) But I got curious: what benefit do you get by paying more for an SD card? I have two to try today: an ADATA Class 10 16GB card, and a rather older Lexar Professional ×133 2GB card (it predates the ‘class’ designations).

So, let’s pull out the benchmarks. First the cheap ADATA card:

16GB ADATA Class 10

Hmm. The read speed (blue line) is fine; it manages to saturate the USB 2.0 bus when reading. But that write speed is pitifully slow. The raw images from my camera hit nearly 8MB, which means it would take nearly 2 seconds to write a single image to the card. Maybe it’s a good thing that the EeePC’s SD card reader failed; using this as a system disk would not have been fun.

Now let’s take a look at the rather more expensive (and older) Lexar Professional ×133 card:

2GB Lexar Professional ×133

Now that’s more like it. Again, the read speed saturates the USB 2.0 bus that my card reader is on, but this time the write speed manages to nearly keep up! That’s an average write speed of over 16MB/s, which means a raw image from my camera takes about half a second to write. (Conveniently, my camera tops out at 2 images per second.) The green dots representing read latencies also have fewer outliers; note the difference in scale between the two images.

This line of SD cards (still sold under the same “Professional ×133” name) has since been updated with larger capacities, and Lexar claims the current cards have a minimum write speed of 20MB/s. I believe them. And if you want a faster card, paying more for the higher-end models is certainly worth it.
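
If you want to sanity-check a card like this yourself, you don’t need a dedicated benchmarking tool; a rough sequential read/write test from a Linux shell will do. This is just a sketch – the /media/sdcard mount point is a placeholder for wherever your card is mounted, and the figures dd reports will include some filesystem overhead:

# Write 256 MiB to the card, flushing to disk before dd reports a speed
$ dd if=/dev/zero of=/media/sdcard/testfile bs=4M count=64 conv=fsync
# Drop the page cache, then read the file back to measure read speed
$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=/media/sdcard/testfile of=/dev/null bs=4M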

But I would need a faster SD card reader if I wanted to test them properly…

Vala Bindings for libmusicbrainz4

When developing Riker, I had a bit of a choice – I could either write (from scratch) a new library to interface with the MusicBrainz XML webservice, or I could create bindings to access the existing libmusicbrainz library from within Vala. Up to today, I’ve gone a little ways down both paths, and both have problems.

If I write a new library from scratch, it could have some nice features like integrating into the GLib main loop and automatically determining proxy settings from the environment. But it would be a lot of coding, and even more debugging.

The existing libmusicbrainz code is better tested, and writing bindings means less code to write overall. Unfortunately, I’m writing Vala bindings for the C bindings to a C++ library. The extra layers cause some weirdness, which means the bindings are more complicated than I would like.

And then there are a few things that the C bindings to libmusicbrainz simply get wrong. For example, they have no working type checking! As a result, even some of the internal test code gets types mixed up, causing hard-to-debug issues. I’m working on a patch to correct this, which will change the C bindings’ API slightly (but, curiously enough, not the ABI).

But in the end, simply to get started faster, I decided that the bindings are the way to go. The hypothetical GObject-based MusicBrainz webservice library will have to wait for another day.

Take a look at my progress so far on the Vala bindings at libmusicbrainz4.vapi.
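
For anyone wanting to try them: assuming the vapi file sits in your source directory and the library links as -lmusicbrainz4 (both assumptions on my part – check your distribution’s package), a compile line would look something like this, with tagger.vala standing in for your own source file:

$ valac --vapidir=. --pkg libmusicbrainz4 -X -lmusicbrainz4 tagger.vala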

Mercurial Frustrates Me

Maybe I’m just used to having too much power at my fingertips. Git was designed, from the ground up, to provide operations to do absolutely anything to a repository, right down to the most basic level of manually creating individual objects in the repository, using user-visible command-line tools that can be driven from scripts. It is literally possible to reimplement most of the user-visible Git commands (such as “commit”!) using shell scripts and some of the more basic Git “plumbing” commands.
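
To illustrate, here’s a bare-bones “commit” built out of nothing but plumbing commands. It’s only a sketch (it assumes your changes are already staged in the index), but every step is an ordinary command you can run from a script:

# Write the current index out as a tree object, capturing its hash
$ tree=$(git write-tree)
# Create a commit object pointing at that tree, with the current HEAD as its parent
$ commit=$(echo "made entirely from plumbing" | git commit-tree $tree -p HEAD)
# Move the current branch to the new commit
$ git update-ref HEAD $commit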

So it’s no surprise that Git commands support special script-friendly output modes. A good example: git log --numstat will print out a summary of the changes in each commit in a parser-friendly, tab-delimited format. Mercurial doesn’t have this option. (Someone wrote a patch to add it, but it was rejected!)
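
That tab-delimited output is trivial to consume from a script. For example, something like this should total up the lines added and removed across a whole range of commits (reusing the branch names from the timing test below):

$ git log --numstat --pretty=tformat: brancha..branchb |
    awk '{ added += $1; removed += $2 } END { print added, removed }'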

And then there are the issues with speed. I’m writing a script that generates a summary of the differences between two branches, based on which commits have been merged into each branch.

$ time git log --pretty=oneline --numstat brancha..branchb
real 0m1.081s

So, about one second. Not bad for a command that’s summarizing around 400 commits from a 250 MiB repository. Let’s see if Mercurial can keep up:

$ time hg log -r "ancestors('branchb') - ancestors('brancha')" --template "{node} {desc|firstline}\n" --stat
real 3m51.994s

Ok… That command took roughly 215× as long in Mercurial as in Git. Amusingly, the same repository in Mercurial is around 500 MiB – twice the size! (For the record, it’s mainly the --stat that slows it down. If I remove the stat, Git takes 0.167s and Mercurial 0.421s.) This is completely unusable.

So, why is Mercurial around 200× slower to calculate the differences contained in a commit? I don’t know. But they really should fix it.