Tag Archives: Dithering

Dithering is a process that can be used to transform a grey-scale image into a black-and-white one. But audio processing also uses dithering.
In general it is a way of using noise to discretize continuous values, which makes it useful for compressing data. But you can also use it as a stylistic filter.
If you want your images to look like they came from an old newspaper, dithering them is a good idea.
And with images of high resolution it can even be hard to distinguish the original from the dithered one.

So let me show you some methods for doing so.

Lattice Boltzmann Dithering

Revisiting Dithering

It’s been a while since my last post and an even longer while since the last one about image processing. However, I’ll just pick up where I left off. The last post in this category was about image dithering. A common use of dithering is the conversion of a grey-scale picture into one consisting of just black and white pixels, which saves space and is also useful for some print media. On a further note, dithered images have a nice visual style that I would call something like dollar bill style.

Conventional dithering methods, however, also introduce artifacts into pictures, and of course they also lose sharpness. So the question is whether one could improve on those conventional dithering methods in order to reduce artifacts and increase sharpness.
If you’re mathematically minded, you might also be worried that conventional dithering is not rotationally invariant, which I assume is due to the fact that the dithering kernel is not symmetric and to the way one iterates over the picture. Rotationally invariant basically means that the output of a method doesn’t change when you rotate your input image, which is of course a desirable trait.

So what about taking some inspiration from physics? After all, physical laws of motion are rotationally invariant. Lattice Boltzmann methods are a way of computing dynamics in fluids, and they have been successfully employed for dithering by Hagenburg et al. (10.1007/978-3-642-10520-3_91). The paper on this research also served as my main source for the method and its implementation, even though my specific implementation might differ in some details.
I’ll call the method Lattice Boltzmann Dithering for the rest of the article, as I didn’t find any other handy name for it.

The Idea Behind Lattice Boltzmann Dithering

The idea behind the method is to model the picture like a fluid with particles in it, where the grey values are the particles. A pixel that is black (with a grey value of 0) has, so to say, no particles in it. Those particles dissipate through the fluid over time, with time modelled as discrete steps. During each time step they dissipate according to the following laws:

  • If a pixel has a value v greater than 1, where 1 is the maximal grey value, (v – 1) is distributed amongst the neighbors of the pixel and the value of the original pixel becomes 1.
  • If a pixel’s value v is lower than a minimal threshold, which the user can set, v is distributed over its neighbors and the pixel itself becomes 0.
  • Otherwise a fraction of the pixel’s value is distributed to each neighbor that has a larger value while being smaller than 1. This fraction is subtracted from the pixel itself.

I hope you see how, with these laws, pixels with a high value (many particles) attract more value. After some time steps you should be left with more or less only black and white pixels.
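A small note on the fractions before we get to the code: my implementation below splits the distributable value according to the D2Q9 lattice that two-dimensional Lattice Boltzmann methods commonly use, so each of the four axis neighbors receives 1/9, each of the four diagonal neighbors 1/36, and the center keeps the remaining 4/9. A quick sanity check that these weights sum to one:

## D2Q9 weights: 4 axis neighbors, 4 diagonal neighbors, plus the center
4 * (1 / 9) + 4 * (1 / 36) + 4 / 9
## [1] 1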

The whole process is stopped when the difference from one time step to the next becomes sufficiently small. And that’s about it… Not really that complicated, if you ask me! 🙂
Of course the paper goes more into detail about the theoretical background and proves some things about the method. If you’re interested, you should absolutely read it.

Implementation of Lattice Boltzmann Dithering

And now let’s come to the part you’ve all been waiting for… The implementation. First I’ll implement a function for the dissipation of the values from each pixel, which will be executed in each time step to get the next distribution of values.

One thing to keep in mind when working with images is that they have borders. You don’t want to dissipate values across those borders, or worse, access a part of the image that isn’t there. So you have to treat the pixels at the borders differently. In the case of this method, I always check first whether a neighboring pixel exists or not.

dissipatePixel <- function(img, minimalThreshold = 0.01){

    ## weights used below: 1/9 for each axis neighbor, 1/36 for each diagonal
    ## neighbor; whatever is not handed to neighbors stays on the pixel
    imgAtNewStep <- matrix(c(0), ncol = ncol(img), nrow = nrow(img))

    for(i in seq(nrow(img))){
        for(j in seq(ncol(img))){
            if(img[i,j] > 1 || img[i,j] < minimalThreshold){

                ## laws 1 and 2: an over-full pixel sheds its excess (v - 1),
                ## a pixel below the threshold sheds its whole value
                toDissipate <- img[i,j]

                if(img[i,j] > 1){
                    toDissipate <- toDissipate - 1
                }

                dissipated <- 0

                if( i > 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if( j > 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(i < nrow(img)){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(j < ncol(img)){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if( i > 1 && j > 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i > 1 && j < ncol(img)){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i < nrow(img) && j > 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i < nrow(img) && j < ncol(img)){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                ## add the non-dissipated amount to the same pixel in the next time step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }else{

                ## law 3: hand a fraction of the pixel's value to every
                ## neighbor that is brighter, but not yet fully white
                dissipated <- 0
                currentPixel <- img[i,j]

                if( i > 1 && img[i - 1,j] > img[i,j] && img[i - 1,j] < 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if( j > 1 && img[i,j - 1] > img[i,j] && img[i,j - 1] < 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(i < nrow(img) && img[i + 1,j] > img[i,j] && img[i + 1,j] < 1){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(j < ncol(img) && img[i,j + 1] > img[i,j] && img[i,j + 1] < 1){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if( i > 1 && j > 1 && img[i - 1,j - 1] > img[i,j] && img[i - 1,j - 1] < 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i > 1 && j < ncol(img) && img[i - 1,j + 1] > img[i,j] && img[i - 1,j + 1] < 1){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i < nrow(img) && j > 1 && img[i + 1,j - 1] > img[i,j] && img[i + 1,j - 1] < 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i < nrow(img) && j < ncol(img) && img[i + 1,j + 1] > img[i,j] && img[i + 1,j + 1] < 1){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                ## add the non-dissipated amount to the same pixel in the next time step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }
        }
    }



    return(imgAtNewStep)

}

Done that! Now the easy part…
But as a loop-based implementation in R is incredibly inefficient, I’ll just run 50 time steps this time instead of iterating until convergence. I will, however, implement this method in C++ at some later point, where loops aren’t inefficient. That will also serve as a good example of how to integrate C++ code into R.
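Just to sketch what that future C++ integration could look like: with the Rcpp package you can compile a C++ function straight from an R session. The function below is only a made-up toy for illustration, not the dithering code.

library(Rcpp)

## compile a small C++ function directly from R
cppFunction('
NumericMatrix scaleMatrix(NumericMatrix img, double factor){
    NumericMatrix out = clone(img);   // do not modify the input in place
    for(int i = 0; i < out.nrow(); i++){
        for(int j = 0; j < out.ncol(); j++){
            out(i, j) *= factor;      // loops are cheap in C++, unlike in R
        }
    }
    return out;
}')

scaleMatrix(matrix(0.5, nrow = 2, ncol = 2), 2)

But back to the R implementation of the time stepping: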

lbDithering <- function(img, epsilon, minimalThreshold){

    i <- 0
    while(TRUE){

        imgAtNewStep <- dissipatePixel(img = img, minimalThreshold = minimalThreshold)

        ## the actual convergence criterion, disabled for now in favor of a
        ## fixed number of time steps:
        #if(norm(imgAtNewStep - img, type = "2") < epsilon){
        if(i >= 50){
            return(imgAtNewStep)
        }else{
            img <- imgAtNewStep
        }
        i <- i + 1
    }
}

Usage

Now let’s reap the fruits of our coding and test the method on a picture I’ve selected. 🙂

library(Raspository)
library(jpeg)

birb <- imageBWFromJpeg("birb.jpg")
birbDithered <- lbDithering(birb@current, epsilon = 20, minimalThreshold = 0.05)
writeJPEG(round(birbDithered), "birbDithered.jpg")

Let me show you the resulting JPEG:

A grainy picture of two pigeons lying down, dithered with the Lattice Boltzmann Method.
Image dithered with Lattice Boltzmann Method

Isn’t that beautiful? For comparison, the original:

The original picture of two pigeons lying down.
Original image

And a conventionally dithered version:

birbDithered2 <- errorDiffusionDithering(birb, method = "mea")
writeJPEG(birbDithered2@current, "birbDithered2.jpg")
A grainy picture of two pigeons lying down, dithered with minimized average error dithering.
Image dithered with minimized average error

You can see more structure in the version created with Lattice Boltzmann Dithering, can’t you? And you can also better understand how light and shadows are distributed.
So overall a pretty nice algorithm, I would say! Although I like the dollar bill look of the conventional one as well.

So that’s it for now! Until soon…

Yours, David

Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.

And well… You might’ve noticed that I used some of my methods/classes a bit differently than in the last few posts. Having some distance from coding this stuff, I noticed I have to change some things about it. Although I don’t know yet whether I’ll make a blog post about that. Do you want to read one?


Bibliography

Hagenburg, K., Breuß, M., Vogel, O., Weickert, J., Welk, M. (2009). A Lattice Boltzmann Model for Rotationally Invariant Dithering. In: Advances in Visual Computing (ISVC 2009), Lecture Notes in Computer Science, vol. 5876. Springer. DOI: 10.1007/978-3-642-10520-3_91

Image Dithering

I showed you a lot of elementary stuff regarding image processing lately, so now it’s time to do something nice. What I will show you is called image dithering. But first let me give you some motivation for it.

Let’s say you have a grayscale picture and you want to paint it with really only black and white. I will first demonstrate what happens if you just “round” the pixel values to black and white. For this purpose I will use a lot of functions I implemented in the previous posts. At first we have to load the picture.

library(Raspository)
imgColor <- imageRGBFromJpeg("Wagner.jpg")
plot(imgColor)
A picture from the Nibelungen Halle in Königswinter of a monument to Richard Wagner, who had a neckbeard, before the invention of the internet.

The next step is to convert it to grayscale. Remember the function I implemented in this post?

img <- imageBWFromRGB(imgColor, c(0.3, 0.59, 0.11))
plot(img)
The same picture as a gray scale image.

Our eyes don’t treat all the colors equally; that’s why I chose these proportions for mixing the colors into the grayscale. I took the values from this post on tutorialspoint.
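To make the weighting concrete: green contributes far more to perceived brightness than blue, so a pure-green pixel ends up much lighter than a pure-blue one. A quick illustration of the weighted sum:

## perceived gray value of a pure-green and a pure-blue pixel
0.3 * 0 + 0.59 * 1 + 0.11 * 0  # green -> 0.59
0.3 * 0 + 0.59 * 0 + 0.11 * 1  # blue  -> 0.11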

Now let’s just quick and dirty convert this to completely black and white.

imgTmp <- img
imgTmp@current <- round(imgTmp@current)
plot(imgTmp)
The same picture with gray values rounded to either black or white.

Wow! We certainly lost basically all the information in the picture.

Implementing Image Dithering

Before I show you how to implement a version of dithering, I will shortly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? Or how exactly would you define gray?
Quite simply as a mixture of black and white. Now think about colors and the screen you’re probably sitting in front of. You have basically only three colors (RGB!), and each of the pixels on your screen consists of three sub-pixels, one of each of those three colors.
You perceive them not as individual dots, because they’re too close together for your eyes to distinguish them. Now the question is whether we could do something similar in this black-and-white case. And of course: Yes, we can! It is called image dithering, which is by the way also applicable to colored images.
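The eye effectively averages over neighboring pixels, so a patch of 50% gray can be mimicked by a checkerboard of pure black and pure white. A tiny illustration:

## a 2x2 black-and-white checkerboard averages out to medium gray
patch <- matrix(c(0, 1,
                  1, 0), nrow = 2, byrow = TRUE)
mean(patch)
## [1] 0.5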

The idea now is that you iterate over all of your pixels and still apply some kind of rounding function to each of them. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed.
There are of course different methods of doing so. I’ll show you two of them today.

Floyd-Steinberg Dithering

Let’s begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            ## propagate the quantization error to the not yet processed
            ## neighbors with the Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16
            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }

            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }

        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

For the future, some kind of generic kernel function would be nice, to be able to apply different error diffusion kernels to pictures.
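Such a function could look roughly like the following sketch. It operates on a plain pixel matrix and takes the kernel as a data frame of offsets and weights; all the names here are hypothetical and not part of the Raspository yet.

## a sketch of a generic error diffusion function; the kernel is given as a
## data frame with offsets (dx, dy) relative to the current pixel and weights
errorDiffusionKernelDithering <- function(pixel, kernel, transformPaletteFunction = round){

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel

            ## distribute the error to every kernel position inside the image
            for(k in seq(nrow(kernel))){
                i <- x + kernel$dx[k]
                j <- y + kernel$dy[k]
                if(i >= 1 && i <= n && j >= 1 && j <= m){
                    pixel[i, j] <- pixel[i, j] + error * kernel$weight[k]
                }
            }
        }
    }

    return(pixel)
}

## Floyd-Steinberg expressed as such a kernel
## usage: errorDiffusionKernelDithering(img@current, floydSteinbergKernel)
floydSteinbergKernel <- data.frame(
    dx     = c(1, -1, 0, 1),
    dy     = c(0,  1, 1, 1),
    weight = c(7,  3, 5, 1) / 16
)

But now let’s test the actual implementation.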

imgFS <- floydSteinbergDithering(img)
plot(imgFS)
The same picture with the Floyd-Steinberg algorithm applied to it.

That’s awesome! It almost looks like we had different gray values in our picture. And there are just some minor artifacts introduced by it, meaning some apparent structures that aren’t actually present in the original.

Now let’s try another method, which has a larger kernel.

Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. And some of you might remember a time when it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let’s come to the actual method. It is called minimized average error. And again the Wikipedia article on it is pretty good.

This time the neighborhood of the pixel is extended by one, so the error is diffused up to two pixels ahead in each direction instead of one. Let me show you the implementation…

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            ## propagate the quantization error to the not yet processed
            ## neighbors with the minimized average error weights (in 48ths)
            if(x < n){
                pixel[x + 1, y    ] <- pixel[x + 1, y    ] + error * 7/48
            }

            if(x < n - 1){
                pixel[x + 2, y    ] <- pixel[x + 2, y    ] + error * 5/48
            }

            if(x > 2 && y < m){
                pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
            }

            if(y < m){
                pixel[x    , y + 1] <- pixel[x    , y + 1] + error * 7/48
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
            }

            if(x < n - 1 && y < m){
                pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
            }

            if(x > 2 && y < m - 1){
                pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
            }

            if(x > 1 && y < m - 1){
                pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
            }

            if(y < m - 1){
                pixel[x    , y + 2] <- pixel[x    , y + 2] + error * 5/48
            }

            if(x < n && y < m - 1){
                pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
            }

            if(x < n - 1 && y < m - 1){
                pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

You want to see its effect, don’t you? Here you go…

imgMea <- minimizedAverageErrorDithering(img)
plot(imgMea)
The same picture with Minimized Average Error applied to it.

Do you see the difference? I think we got rid of the artifacts! Isn’t that amazing? I really love how demonstrative image processing is.
But that’s it for today… See you soon!

Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.
