Image Dithering

I've shown you a lot of elementary image processing lately, so now it's time for something nicer: image dithering. But first, let me give you some motivation for it.

Let's say you have a grayscale picture and you want to paint it using only pure black and white. To start, I'll demonstrate what happens if you just "round" each pixel value to black or white. For this purpose I will use several functions I implemented in previous posts. First, we have to load the picture.

library(Raspository)
imgColor <- imageRGBFromJpeg("Wagner.jpg")
plot(imgColor)
A picture from the Nibelungen Halle in Königswinter of a monument to Richard Wagner, who had a neckbeard before the invention of the internet.

The next step is to convert it to grayscale. Remember the function I implemented in this post?

img <- imageBWFromRGB(imgColor, c(0.3, 0.59, 0.11))
plot(img)
The same picture as a grayscale image.

Our eyes don't treat all colors equally; that's why I chose these proportions for mixing the color channels into grayscale. I took the values from this post on tutorialspoint.

Now let's convert this, quick and dirty, to pure black and white.

imgTmp <- img
imgTmp@current <- round(imgTmp@current)
plot(imgTmp)
The same picture with each gray value rounded to either black or white.

Wow! We lost basically all of the information in the picture.

Implementing Image Dithering

Before I show you how to implement a version of dithering, let me briefly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? How exactly would you define gray?
Quite simply: as a mixture of black and white. Now think about colors and the screen you're probably sitting in front of. It basically has only three colors (RGB!), and each pixel on your screen consists of three sub-pixels, one of each of those colors.
You don't perceive them as individual dots, because they're too close together for your eyes to distinguish. So the question is: could we do something similar in our black-and-white case? And of course: yes, we can! It is called image dithering, which is, by the way, also applicable to colored images.

The idea is that you iterate over all of your pixels and still apply some kind of rounding function to each of them. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed.
Of course there are different ways of doing so. I'll show you two of them today.
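To make the idea concrete, here is a minimal one-dimensional sketch. It is just for illustration and not part of the Raspository package: round each value and pass the whole rounding error on to the right neighbor.

ditherRow <- function(v){
    for(i in seq_along(v)){
        oldValue <- v[i]
        v[i] <- round(oldValue)
        # propagate the rounding error to the next, not yet processed value
        if(i < length(v)){
            v[i + 1] <- v[i + 1] + (oldValue - v[i])
        }
    }
    return(v)
}

ditherRow(rep(0.4, 10))
## [1] 0 1 0 1 0 0 1 0 1 0

Four out of ten values become white, so the average intensity of 0.4 is preserved, even though each single value is now either zero or one. The two methods below do exactly this in two dimensions, just with more sophisticated distributions of the error.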

Floyd-Steinberg Dithering

Let's begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.
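In short, the algorithm pushes the rounding error of each pixel to four of its unprocessed neighbors. Written out as a matrix (fsKernel is just a name I made up for illustration; the weights are exactly the ones used in my implementation below, with the current pixel at the middle zero of the first row):

# weights relative to the current pixel (*):
#      .      *    7/16
#    3/16   5/16   1/16
fsKernel <- matrix(c(0, 0, 7,
                     3, 5, 1) / 16,
                   nrow = 2, byrow = TRUE)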

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            # the rounding error that has to be diffused to the neighbors
            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            # distribute the error to the unprocessed neighbors,
            # respecting the image borders
            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }

            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }

        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

For the future, some kind of generic kernel function would be nice, so that different error-diffusion kernels can be applied without writing a new function each time.
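Here is a rough sketch of what I mean. Keep in mind that this is an assumption of mine and not part of the package: the name errorDiffusionDithering and the anchor parameter, which marks the current pixel's column within the kernel's first row, are made up for illustration.

errorDiffusionDithering <- function(img, kernel, anchor,
                                    transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x, y] <- newPixel

            # row i of the kernel corresponds to the y offset i - 1,
            # column j to the x offset j - anchor
            for(i in seq(nrow(kernel))){
                for(j in seq(ncol(kernel))){
                    tx <- x + j - anchor
                    ty <- y + i - 1
                    # skip zero weights and pixels outside of the image
                    if(kernel[i, j] != 0 && tx >= 1 && tx <= n && ty <= m){
                        pixel[tx, ty] <- pixel[tx, ty] + error * kernel[i, j]
                    }
                }
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

With the fsKernel matrix from above, errorDiffusionDithering(img, fsKernel, anchor = 2) should give the same result as floydSteinbergDithering(img). But now let's test the actual implementation.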

imgFS <- floydSteinbergDithering(img)
plot(imgFS)
The same picture with the Floyd-Steinberg algorithm applied to it.

That's awesome! It almost looks as if we had different gray values in our picture. And there are only some minor artifacts introduced by it, meaning some apparent structures that aren't actually present in the original.

Now let's try another method, which uses a larger kernel.

Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. Some of you might remember a time when it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today, image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let's get to the actual method. It is called minimized average error dithering. And again, the Wikipedia article on it is pretty good.

This time the neighborhood of the pixel is larger: the error is spread up to two pixels to each side and up to two rows down. Again written as a matrix (jjnKernel is just my illustrative name; the values are the ones from the implementation below):
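# weights relative to the current pixel (*):
#      .      .      *    7/48   5/48
#    3/48   5/48   7/48   5/48   3/48
#    1/48   3/48   5/48   3/48   1/48
jjnKernel <- matrix(c(0, 0, 0, 7, 5,
                      3, 5, 7, 5, 3,
                      1, 3, 5, 3, 1) / 48,
                    nrow = 3, byrow = TRUE)

Plugged into the generic sketch from above, errorDiffusionDithering(img, jjnKernel, anchor = 3) should be equivalent to the dedicated implementation that follows.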

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            # the rounding error that has to be diffused to the neighbors
            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            # current row
            if(x < n){
                pixel[x + 1, y    ] <- pixel[x + 1, y    ] + error * 7/48
            }
            if(x < n - 1){
                pixel[x + 2, y    ] <- pixel[x + 2, y    ] + error * 5/48
            }

            # next row
            if(x > 2 && y < m){
                pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
            }
            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
            }
            if(y < m){
                pixel[x    , y + 1] <- pixel[x    , y + 1] + error * 7/48
            }
            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
            }
            if(x < n - 1 && y < m){
                pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
            }

            # row after the next one
            if(x > 2 && y < m - 1){
                pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
            }
            if(x > 1 && y < m - 1){
                pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
            }
            if(y < m - 1){
                pixel[x    , y + 2] <- pixel[x    , y + 2] + error * 5/48
            }
            if(x < n && y < m - 1){
                pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
            }
            if(x < n - 1 && y < m - 1){
                pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

You want to see its effect, don't you? Here you go…

imgMea <- minimizedAverageErrorDithering(img)
plot(imgMea)
The same picture with Minimized Average Error applied to it.

Do you see the difference? I think we got rid of the artifacts! Isn't that amazing? I really love how illustrative image processing is.
But that's it for today… See you soon!

Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.
