Learning Pillow

Recently I was playing with date formatting in Python and wrote a little script that takes a JSON object full of upcoming dates, then shows a countdown for each.

Running this in the terminal is easy, but I wanted a simple way to see the information without having to do that. I could have written a macOS app to add it to the UI at some point, but Swift is still a mess and I’m not going to go learn Objective-C just for this. I had used PIL (more specifically Pillow, the actively maintained fork) in the past through some other work, but figured it would be handy to learn it properly.

There are plenty of apps that will overlay arbitrary text on an image - but what’s a programmer to do but reinvent the wheel every time? :)

TL;DR: the code is up on GitHub at yaleman/pybackground.

The basic workflow is this:

  • Load the image
  • Resize it to the output size
  • Calculate the text colour based on the background that’s behind it
    • Find the bounding box of the text
    • Calculate the average background colour for that region
    • Find the complementary colour for that average
  • Overlay the text
  • Output the image

There’s a fair bit of standard code - handling command-line options, imports and so on - so I’ll cut to the first fun bit: the initial load of the image.

baseimage = Image.open(args.sourcefile).convert('RGBA')

Fairly simple, but the conversion is important for the rest of the process: when you composite the layers at the end, they both need to be in the same colour mode.

Next up, complementary colours. This came about because originally I was using a single known image as the background, but I wanted to be able to use any arbitrary image in my wallpapers folder.

Here I do the bounding box calculation. Python list comprehensions are amazing for this, allowing me to avoid the classic for loop and just process it all inline.

textbox_x = max([drawobj.textsize(line, fontobject)[0] for line in textlines]) + (FONT_SIZE / 3)

For those playing at home: “for each line, calculate the size of the text, but only grab the first element of the return value (the width), then find the maximum value of the list.” Oh, and add a little for the border.
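The same pattern can be sketched without Pillow at all - here `fake_textsize` is a hypothetical stand-in for `drawobj.textsize`, assuming ten pixels of width per character:

```python
# Hypothetical stand-in for drawobj.textsize: 10px wide per character, 20px tall
def fake_textsize(line):
    return (10 * len(line), 20)

FONT_SIZE = 24
textlines = ["Christmas: 47 days", "Payday: 12 days"]

# Width of the widest line, plus a little padding for the border
textbox_x = max(fake_textsize(line)[0] for line in textlines) + FONT_SIZE / 3
print(textbox_x)
```

The widest line here is 18 characters, so the box comes out at 180 plus 8 pixels of padding.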

Doing this in another language, or without understanding list comprehensions, would be a mess… horrible pseudo-code incoming!

$max = 0;
for ($i = 0; $i < count($textlines); $i++) {
    $tmp = drawtext($textlines[$i]);
    if ($tmp[0] > $max) {
        $max = $tmp[0];
    }
}

Ew, right?

The next step is to grab the bounding box and work out the average colour.

cropbox = baseimage.crop((int(x0), int(y0), int(x1), int(y1)))
cropbox = cropbox.resize((1, 1), resample=Image.ANTIALIAS).resize((100, 100))
averagecolour = cropbox.getpixel((0, 0))

All fairly self-evident - Python is just pseudocode with indents, after all. Crop out a new image object based on the bounding box of the text, then resize it down to a single pixel, which is a quick way to get an average - assuming you understand how your resampling filter works. I didn’t, but I learned! I then resized it back up to 100x100 for debugging, which is much easier to see than a single pixel. The last step here is to grab the value of that average pixel.
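The 1x1-resize trick amounts to averaging each channel across the region. As a sketch without Pillow, using a hypothetical list of RGB tuples standing in for the cropped region’s pixels:

```python
# Hypothetical pixel data for a cropped region: a list of (R, G, B) tuples
pixels = [(200, 40, 40), (220, 60, 60), (180, 20, 20), (200, 40, 40)]

# Average each channel independently - zip(*pixels) groups all the Rs, Gs and Bs
averagecolour = tuple(sum(channel) // len(pixels) for channel in zip(*pixels))
print(averagecolour)
```

This is roughly what a box-filter downscale to one pixel produces; fancier filters like the one Pillow uses weight the pixels a little differently, but for picking a text colour the difference doesn’t matter much.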

averagecolour = [colour / 255.0 for colour in averagecolour[:3]]  # pillow's 0-255 -> colorsys's 0.0-1.0
hls = colorsys.rgb_to_hls(*averagecolour)
newhls = ((hls[0] + 0.5) % 1.0, 1.0 - hls[1], hls[2])  # rotate hue 180 degrees, flip lightness
newrgb = colorsys.hls_to_rgb(*newhls)
newrgb = (int(255 * newrgb[0]), int(255 * newrgb[1]), int(255 * newrgb[2]))  # back to 0-255

Complementary colour calculations were quite the hassle until I figured out that colorsys uses 0.0-1.0 and Pillow uses 0-255 for its values. I did the usual searches on the ’net, found the theory I needed, then applied it.

In short, convert the input to HSL (Hue, Saturation, Lightness) - though colorsys calls it HLS. Next, rotate the Hue to its opposite - e.g., if your Hue is 50°, the complementary one will be at 230° on the wheel, 180° further around. I also found that flipping Lightness helped with visibility in my testing, since dark text shows better on light backgrounds and so forth.

To be honest, the rest of the code is fairly simple. Overlay the text starting with the last line (woo, negative slice steps!) and write out the file.
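The bottom-up layout can be sketched like this - `LINE_HEIGHT` and the starting `y` are assumptions for illustration, and the actual draw call is stubbed out as a comment:

```python
textlines = ["Christmas: 47 days", "Payday: 12 days", "Holidays: 3 days"]
LINE_HEIGHT = 30
y = 300  # hypothetical bottom edge of the text area

positions = []
for line in textlines[::-1]:      # negative step walks the list backwards
    y -= LINE_HEIGHT
    positions.append((y, line))   # in the real script: drawobj.text((x, y), line, ...)
print(positions)
```

Walking the list backwards means the last line gets anchored at the bottom and each earlier line stacks above it, so the block grows upwards from a fixed baseline.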

All up, it took me about four hours to learn the basics of colour representation, image resizing and text layout within Pillow, then apply them in a handy little script that’s reusable in future. Woo!


Tags: python