Wiki Loves Earth

And So Do I

As you might have guessed since my first post about image processing, I’m really interested in photography. And for some time I’ve been trying to showcase my photography. One of those endeavors has been Instagram, with which I’m not really satisfied. Roughly speaking, it’s just not the place for serious photography. It’s just the way Instagram is made…
So a few days ago I stumbled upon Wiki Loves Earth, an annual photography contest run by the Wikimedia community. Pictures you submit to this contest become part of Wikimedia Commons and are licensed under Creative Commons. While this basically takes away the ability to make money with my pictures, it opens up the possibility that my pictures will be used in Wikipedia articles… Which would be nice!
And the prizes for the Wiki Loves Earth contest are also nice.

A wood duck hiding behind some grass.
A picture of a wood duck I took at the DFG

Furthermore, Wikipedia has become quite an important tool for basically everyone. If I want to learn about a new topic, Wikipedia is often the first place I’ll visit. And I can’t really express how much Wikipedia has helped me up until now. So it’s probably also time to give something back.
I’ve taken a lot of pictures of nature, so I have a lot to contribute to this contest. But I’ve also already uploaded some pictures of buildings I took during my travels. If you’re interested, you can look my uploads up here.

So let’s see how this competition goes for me. 🙂 I don’t count on winning anything, because there are so many photographers out there far more skilled than me. But who knows what will come of it?

And what about you? Do you have anything to contribute to the Creative Commons? 😉

Living Without a Smartphone

What I Learnt Losing My Smartphone

Well… Some of you might know, some won’t: I lost my smartphone a few weeks ago during a weekend trip to Bonn. Stuff like that can happen. I must have forgotten it on a bench where I sat with my friends for a while. I panicked when I realized I didn’t have my phone anymore. We returned to the aforesaid bench, to no avail. It wasn’t there anymore… Maybe I lost it somewhere else. I even suspected that somebody had stolen it.
“You look sad, David,” one of my friends said to me. And I really was at that moment. It’s funny how the loss of such a little, lifeless thing can make you sad. I don’t know whether it was its value, the suspicion that someone took it, or the stress its loss would mean for me. But sad I definitely was.

A picture of the river Rhine with some skyscrapers in the background.
A picture of the Rhine river taken at the place where I probably lost my smartphone.

About Two Weeks Without a Smartphone

The next two weeks I was without a smartphone. And to make a long story short, it wasn’t so bad. I would even say it was a mostly positive experience. I found out that I actually didn’t need Google Maps to navigate reality. Something I had always suspected, but had never been brave enough to try out. Furthermore, it felt liberating not to be instantly reachable all the time. Not having to worry about WhatsApp or Telegram messages for once was really relaxing. And I still managed to meet up with other people.
The only big bummer was everything that required a PIN via SMS. But that made me think that it might be good to have an extra phone just for those PINs anyway. I don’t know yet, but it might be a more robust setup.

Getting My Phone Back

Well, it appears that I really had lost my smartphone. I almost lost faith, but I kept checking the homepage of the lost and found office in Bonn. And one miraculous day there was my phone!!! 🙂
So nobody stole it; most likely someone found it, namely before we came back to the bench. And this person must’ve brought it to the lost and found. I’m really thankful to this anonymous stranger. Everything’s back to normal for now. But I’m planning on changing my smartphone habits. I don’t know how yet. But I will probably keep you updated.

So have a nice evening!

Image Dithering

I showed you a lot of elementary stuff regarding image processing lately, so now it’s time to do something nice. What I will show you is called image dithering. But first let me give you some motivation for it.

Let’s say you have a grayscale picture and you want to paint it using only black and white. To motivate dithering, I will first demonstrate what happens if you just “round” the pixel values to black and white. For this purpose I will use a lot of functions I implemented in the previous posts. First we have to load the picture.

library(Raspository)
imgColor <- imageRGBFromJpeg("Wagner.jpg")
plot(imgColor)
A picture from the Nibelungen Halle in Königswinter of a monument to Richard Wagner, who had a neckbeard, before the invention of the internet.

The next step is to convert the picture to grayscale. Remember the function I implemented in this post?

img <- imageBWFromRGB(imgColor, c(0.3, 0.59, 0.11))
plot(img)
The same picture as a gray scale image.

Our eyes don’t treat all the colors equally; that’s why I chose these proportions for mixing the color channels into the grayscale. I took the values from this post on tutorialspoint.

Now let’s just quick and dirty convert this to completely black and white.

imgTmp <- img
imgTmp@current <- round(imgTmp@current)
plot(imgTmp)
The same picture as a with gray scales either rounded to black or white.

Wow! We certainly lost basically all the information in the picture.

Implementing Image Dithering

Before I show you how to implement a version of dithering, let me briefly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? Or how exactly would you define gray?
Quite simply: as a mixture of black and white. Now think about colors and the screen you’re probably sitting in front of. You have basically only three colors (RGB!). And each of the pixels on your screen consists of three sub-pixels, one of each of those three colors.
You don’t perceive them as individual dots, because they’re too close together for your eyes to distinguish. The question now is whether we can do something similar in our black and white case. And of course: Yes, we can! It is called image dithering, which is by the way also applicable to colored images.

The idea is that you iterate over all of your pixels and apply some kind of rounding function to each of them. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed.
There are of course different methods of doing so. I’ll show you two of them today.

Floyd-Steinberg Dithering

Let’s begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            # diffuse the quantization error to the yet unprocessed
            # neighbors using the Floyd-Steinberg weights (x/16)
            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }

            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }

        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

For the future, some kind of kernel function would be nice, so that one can apply different diffusion kernels to pictures; I’ll sketch that idea at the end of this post. But for now let’s test it.

imgFS <- floydSteinbergDithering(img)
plot(imgFS)
The same picture with the Floyd-Steinberg algorithm applied to it.

That’s awesome! It almost looks like we had different gray values in our picture. And there are just some minor artifacts introduced by it, meaning some apparent structures that aren’t actually present in the original.

Now let’s try another method, which has a larger kernel.

Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. And some of you might remember a time when it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let’s come to the actual method. It is called minimized average error. And again the Wikipedia article on it is pretty good.

This time the neighborhood of the pixel extends one step further in each direction. Let me show you the implementation…

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

                # diffuse the quantization error over a larger neighborhood,
                # using the minimized average error weights (x/48)
                if(x < n){
                    pixel[x + 1, y    ] <- pixel[x + 1, y    ] + error * 7/48
                }
                if(x < n - 1){
                    pixel[x + 2, y    ] <- pixel[x + 2, y    ] + error * 5/48
                }

                if(x > 2 && y < m){
                    pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
                }

                if(x > 1 && y < m){
                    pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
                }

                if(y < m){
                    pixel[x    , y + 1] <- pixel[x    , y + 1] + error * 7/48
                }

                if(x < n && y < m){
                    pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
                }

                if(x < n - 1 && y < m){
                    pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
                }

                if(x > 2 && y < m - 1){
                    pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
                }

                if(x > 1 && y < m - 1){
                    pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
                }

                if(y < m - 1){
                    pixel[x    , y + 2] <- pixel[x    , y + 2] + error * 5/48
                }

                if(x < n && y < m - 1){
                    pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
                }

                if(x < n - 1 && y < m - 1){
                    pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
                }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}

You wanna see its effect, don’t you? Here you go…

imgMea <- minimizedAverageErrorDithering(img)
plot(imgMea)
The same picture with Minimized Average Error applied to it.

Do you see the difference? I think we got rid of the artifacts! Isn’t that amazing? I really love how visual image processing is.
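By the way, regarding the kernel function I wished for above: here is a hedged sketch of what a generic error-diffusion function could look like. It’s not part of the Raspository yet, and representing the kernel as a data.frame of offsets and weights is my own ad-hoc choice. Floyd-Steinberg then becomes a mere configuration, and minimized average error would just be a longer data.frame with the /48 weights.

errorDiffusionDithering <- function(img, kernel, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){
            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel

            # propagate the error to every neighbor the kernel defines,
            # skipping positions outside the image
            for(k in seq_len(nrow(kernel))){
                i <- x + kernel$dx[k]
                j <- y + kernel$dy[k]
                if(i >= 1 && i <= n && j >= 1 && j <= m){
                    pixel[i, j] <- pixel[i, j] + error * kernel$weight[k]
                }
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)
    return(cropPixels(ditheredImage))
}

# Floyd-Steinberg expressed as such a kernel
floydSteinbergKernel <- data.frame(
    dx     = c(1, -1, 0, 1),
    dy     = c(0,  1, 1, 1),
    weight = c(7,  3, 5, 1) / 16
)
# usage: plot(errorDiffusionDithering(img, floydSteinbergKernel))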
But that’s it for today… See you soon!

Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.

Converting HSV To RGB

This post is the sequel to this week’s post about color spaces. I think it will be a rather short post, where I just implement the conversion back from HSV to RGB and then test my methods for demonstration purposes.
In doing so I used the corresponding Wikipedia article as a rough guideline.

So let me show you how I did it.

library(Raspository)
# map the hue sector (H in [0,6)) to the corresponding RGB permutation
calculateRGBvalue <- function(H, C, X, m){
   if(H < 0 || H > 6){
       return(c(0, 0, 0))
   }else if(H <= 1){
       return(c(m + C, m + X, m))
   }else if(H <= 2){
       return(c(m + X, m + C, m))
   }else if(H <= 3){
       return(c(m, m + C, m + X))
   }else if(H <= 4){
       return(c(m, m + X, m + C))
   }else if(H <= 5){
       return(c(m + X, m, m + C))
   }else{
       return(c(m + C, m, m + X))
   }
}
require(abind)
## Loading required package: abind
hsvArrayToRgb <- function(hsvArray){

    # Calculate the chroma
    C <- hsvArray[,,3] * hsvArray[,,2]

    H <- hsvArray[,,1] / 60

    X <- C * (1 - abs(H %% 2 - 1))

    m <- hsvArray[,,3] - C
    rgb <- mapply(FUN = calculateRGBvalue, H = H, C = C, X = X, m = m)

    rgbArray <- abind(matrix(rgb[1,], nrow = nrow(hsvArray)),
                      matrix(rgb[2,], nrow = nrow(hsvArray)),
                      matrix(rgb[3,], nrow = nrow(hsvArray)),
                      along = 3)

    return(rgbArray)
}


imageRGBFromHSV <- function(img){
    return(new("imageRGB", original = hsvArrayToRgb(img@original),
               current = hsvArrayToRgb(img@current),
               operations = img@operations))
}

So far so good… That’s basically a one-to-one implementation of the method from the Wikipedia article, with the one difference being that I add m upfront instead of at the end.

So let’s come to the fun part!

image <- imageRGBFromJpeg("Mountain.jpg")
## Loading required package: jpeg
plot(image)
The original pictures

What a ginormous mountain! But it isn’t even the tallest mountain in Saarland, believe it! Now we can also plot this as a black and white picture.

plot(imageBWFromRGB(image))
Picture in black and white

Then let’s convert to HSV and plot the S (saturation) and V (value/brightness) channels as black and white pictures.

hsv <- imageHSVFromRGB(image)
plot(as.raster(hsv@current[,,2]))
The saturation component of the pictures

Wow, that looks like something from the 90s!

plot(as.raster(hsv@current[,,3]))
The brightness component of the pictures

OK, and now compare this to the conversion to black and white from RGB. It’s a worse picture in my opinion, but in contrast to the other one it holds accurate information about the lighting condition of each pixel. Do you see how this could be useful in, e.g., Computer Vision?
Let’s just say you’re searching for something in your pictures that absorbs a lot of light. Then this representation could be of use.

And last but not least, we have to convert the image back to confirm that we actually get the original picture again.

plot(imageRGBFromHSV(hsv))
The pictures first converted to HSV and then back to RGB
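If you don’t want to trust your eyes alone, here is a quick numerical sanity check. This is a small addition of mine, using nothing but the slots defined earlier:

rgbAgain <- imageRGBFromHSV(hsv)
# the maximal absolute deviation from the original;
# should be close to zero, up to floating point error
max(abs(rgbAgain@current - image@current))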

Looks the same! I guess that concludes this blog post. And I already have some plans for the next few posts.
So, hopefully see you! 🙂

Introduction To Color Space

So far we have only looked at black and white pictures, but of course I know y’all are craving for me to talk about color pictures! And that’s what I will do today… after having visited a lecture about it.
What exactly I will do today is implement an S4 class for colored pictures with an RGB color space, similar to the one for black and white pictures, and then implement one with an HSV color space, adding corresponding transformation methods between the two classes.
So pretty basic stuff as well, but especially the HSV version could be pretty useful for manipulating pictures or for further use in Computer Vision procedures.

A colorful picture with the upper half being the sky with clouds and the lower half the view over the Spandau Citadel and waters in the background.
A picture taken in Berlin-Spandau at the Citadel – Scenes like this are a nice reminder what a beautiful thing color vision is. 🙂

RGB Color Space

So far we have only looked at pixels with a one-dimensional value between zero and one.
In a more mathematical way, we looked at a function:

\(
\mathbb{N} \times \mathbb{N} \rightarrow [0,1]
\)

If this notation confuses you… Just ignore it! But in other words: the picture is a function of two natural-number coordinates that gives you back a real-valued intensity between zero and one.

If we want to do the same with colored images, it is useful to define each pixel as having three dimensions: one for Red, one for Green and one for Blue. Hence the name RGB.
This way we would have a function:

\(
\mathbb{N} \times \mathbb{N} \rightarrow [0,1] \times [0,1] \times [0,1]
\)

Again in other words: the picture is a function of two natural-number coordinates that gives you back a tuple of three real values between zero and one.
By the way, this RGB color space is probably what you’re looking at right now, as computer monitors usually use it, though of course things in reality are more complicated. There are more than three wavelengths of light, but as our color vision system has only three different color receptors, an RGB color space is good enough for the human eye. In scientific systems, however, it could very well be that you want more than three colors in a picture. Coming originally from Molecular Biology, I know that there are many distinct wavelengths that have some biological importance, e.g. where certain molecules or functional groups of molecules absorb light.
So you could imagine using color spaces of higher dimension for some scientific applications.

But let’s return to the topic of RGB color spaces. In programming terms, one way of implementing such a color space is with a three-dimensional array, which is pretty straightforward.
So without further ado, let me show you my implementation. It’s pretty simple, as the jpeg package already returns the right kind of array when reading a JPEG.

imageRGB <- setClass("imageRGB", slots = list(original = "array", current = "array", operations = "list"))
imageRGBFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    return(new("imageRGB", original = image, current = image, operations = list()))
}
plot.imageRGB <- function(object){
    plot(as.raster(object@current))
}
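A quick usage sketch (the file name is just a placeholder for any JPEG you have at hand):

citadel <- imageRGBFromJpeg("Citadel.jpg")
plot(citadel)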

Converting To Black And White

Now I also want a method to convert an RGB image into a black and white one.

library(Raspository)
imageBWFromRGB <- function(img, chPortion = c(0.33, 0.33, 0.33)){
    if(sum(chPortion) > 1){
        stop("Channel portions mustn't add up to more than one.")
    }
    original <- img@original[,,1] * chPortion[1] + img@original[,,2] * chPortion[2] + img@original[,,3] * chPortion[3]
    current <- img@current[,,1] * chPortion[1] + img@current[,,2] * chPortion[2] + img@current[,,3] * chPortion[3]
    return(new("imageBW", original = original, current = current, operations = img@operations))
}

The parameter chPortion is a vector that indicates how much each channel from the original color space should contribute to the resulting black and white picture. As a matter of course, its components aren’t allowed to add up to more than one.
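For example, converting with perception-based weights instead of the uniform default (a sketch; the file name is again a placeholder):

img <- imageRGBFromJpeg("Citadel.jpg")
# red, green and blue weighted by their perceived brightness
bw <- imageBWFromRGB(img, chPortion = c(0.3, 0.59, 0.11))
plot(bw)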

HSV Color Space

Another color space I want to show you today is HSV. While you can imagine RGB as a cube, in the same analogy HSV would be a cylinder, with one polar coordinate H, which stands for hue. This coordinate determines the color of a given pixel.
The other two coordinates are S, the saturation, and V, the value (brightness) of a pixel. Those two coordinates are again real numbers between zero and one, as before.
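To make that concrete: a pure red pixel with RGB (1, 0, 0) has V = 1 (the maximum channel), S = 1 (fully saturated) and H = 0°, while a middle gray (0.5, 0.5, 0.5) has V = 0.5 and S = 0, with the hue being undefined (conventionally set to 0°).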

Now you could ask why this representation is useful, and the answers are many. Let me just name the editing of pictures as one of them. Let’s say you have a picture that is too dark, but the colors themselves are OK. Then you can use the HSV color space to change the value/brightness while leaving hue and saturation the same; I’ll sketch this below, after the implementation.
But of course there are many other color spaces with other neat properties you might wanna exploit.
However, instead of dreaming about the perfect color space, which feels like magic, let’s just implement this one, including the conversion from RGB.
We’ll also need some helper functions for it. And please, if you don’t understand something, just ask me in the comments. There should be no shame in not understanding something, and I will be happy to give you an explanation to help you understand it.

imageHSV <- setClass("imageHSV", slots = list(original = "array", current = "array", operations = "list"))

calculateHUE <- function(max, maxIndex, min, r, g, b){
    h <- 0.0
    if(max == min){
        return(h)
    }else if(maxIndex == 1){
        h <- 60.0 * ((g - b)/(max - min)) 
    }else if(maxIndex == 2){
        h <- 60.0 * (2.0 + (b - r)/(max - min)) 
    }else if(maxIndex == 3){
        h <- 60.0 * (4.0 + (r - g)/(max - min)) 
    }

    # if the value is negative, add 360° to it
    if(h >= 0){
        return(h)
    }else{
        return(h + 360)
    }
}

rgbArrayToHsv <- function(rgbArray){
    # get the maximal color and its index in each pixel
    max <- apply(rgbArray, c(1,2), max)
    maxIndex <-apply(rgbArray, c(1,2), which.max)
    # get the minimal color in each pixel
    min <- apply(rgbArray, c(1,2), min)

    # calculate the hue for each pixel
    h <- mapply(FUN = calculateHUE, max = max, maxIndex = maxIndex, min = min, 
                r = rgbArray[,,1], g = rgbArray[,,2], b = rgbArray[,,3])
    # convert vector back to matrix
    h <- matrix(h, ncol = ncol(max))

    # calculate saturation
    s <- (max - min)/max
    # set values to zero, where max is 0 (division by zero -> NA)
    s[is.na(s)] <- 0
    # max is equal to v (value/brightness)
    v <- max

    # bind matrices together to array and return
    require(abind)
    hsvArray <- abind(h, s, v, along = 3)
    return(hsvArray)
}

imageHSVFromRGB <- function(img){
    return(new("imageHSV", original = rgbArrayToHsv(img@original), 
               current = rgbArrayToHsv(img@current), 
               operations = img@operations))
}

Phew, this took more effort than expected. Luckily I feel comfortable using the various apply functions in R. They might seem alien at first, but there’s a lot of neat stuff you can do once you know them. (One detail worth knowing: mapply traverses the input matrices in column-major order, which is why the resulting vector can simply be reshaped back into a matrix with the right number of columns.)
But one side note… The calculateHUE function is very time consuming, because it is called a lot. So I think it would be worthwhile to re-implement it in C++ at some point to speed things up. Probably a good exercise for the future. I’ll keep you posted. 🙂
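And as a teaser for the editing use case from above, here is a hedged sketch of brightening a too-dark picture by scaling the V channel. Note that brightenImage is hypothetical, and that it relies on the back-conversion imageRGBFromHSV, which I’ll only implement in the follow-up post:

brightenImage <- function(img, factor = 1.3){
    hsv <- imageHSVFromRGB(img)
    # scale the brightness channel and clamp it to the valid range [0, 1]
    hsv@current[,,3] <- pmin(hsv@current[,,3] * factor, 1)
    return(imageRGBFromHSV(hsv))
}
# usage (file name purely illustrative):
# plot(brightenImage(imageRGBFromJpeg("Citadel.jpg"), 1.3))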

Anyway, that’s enough for me today… David tired now. 😮 The conversion from HSV back to RGB will probably come in the next few days.
So, see ya!

Availability Of The Code

And of course you can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageRGB.R and R/imageHSV.R.

Bookmark-Coloring Algorithm

And Writing Documentations In R

I wrote this function months ago, while writing the report for my seminar, and I’ve wanted to make a post about the Bookmark-Coloring Algorithm ever since, but I never found the right chance. Since I spent yesterday evening writing some documentation for the Raspository, I thought it would be nice to make a post about that topic and use the Bookmark-Coloring Algorithm as the example.

The Bookmark-Coloring Algorithm

This algorithm’s use is to generate a personalized PageRank. In more detail: given a starting website, the algorithm calculates the chance of ending up at each other website. But this idea is applicable to other fields as well. In my post about network-based integration through heat diffusion I showed you a similar method applied to a network of multi-omic data. On the same data set you could use the Bookmark-Coloring Algorithm.
The basic idea behind the Bookmark-Coloring Algorithm is that you have some color that diffuses through a network, which is in my opinion equivalent to heat diffusing through a network. Correct me if I’m wrong.

I implemented the algorithm following the paper about it by Pavel Berkhin (doi: 10.1080/15427951.2006.10129116). More precisely, I implemented Algorithm 2 from the paper. So let me show you my implementation:

require(igraph)
require(dequer)
BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){
    # initialise vector of transition chances
    p <- c()
    p[V(graph)] <- 0

    q <- queue()
    pushback(q, v)

    # initialise vector that indicates how much color is in one node
    colorInVertex <- c()
    colorInVertex[v] <- 1

    # execute as long as the queue q has elements
    while(length(q) > 0){
        i <- pop(q)
        w <- colorInVertex[i]
        # use up the color in node
        colorInVertex[i] <- NA

        p[i] <- p[i] + retentionCoefficient * w

        # if the color is below the tolerance, don't propagate it further
        if(w < tol){

            next
        }

        # execute for all neighbors
        for(j in neighbors(graph, i, mode = "out")){
            if(!is.na(colorInVertex[j])){
                # add color to neighbor
                colorInVertex[j] <- colorInVertex[j] +
                    ((1 - retentionCoefficient) * w/degree(graph, i, mode = "out"))
            }else{
                # initialise color in neighbor
                pushback(q, j)
                colorInVertex[j] <- (1 - retentionCoefficient) * w/degree(graph, i, mode = "out")
            }
        }

    }

    return(p)
}
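Before we turn to the documentation, here’s a quick smoke test on a small random graph (the graph and parameters are arbitrary, just for illustration):

set.seed(42)
# a small random directed graph as an arbitrary test case
g <- sample_gnp(20, 0.2, directed = TRUE)
p <- BCA(g, v = 1)
# vertices near the start vertex should have received the most color
head(sort(p, decreasing = TRUE))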

I wrote some comments that hopefully help you understand what’s going on. That’s also the first part of the documentation. It’s a step you’ll probably do while writing your code, and in my opinion it’s always useful.

So are we done with the documentation? No. Not if we want to put this function into a package.

roxygen2 documentation

roxygen2 is a nice package allowing you to write in-line documentation in R. I won’t go too much into detail about it here, as there are a lot of online sources for it, but I will show you with a short example how to do it!

Now let me show you the documentation for the BCA function; afterwards I will go over the specified tags.

#' Bookmark Coloring Algorithm
#' 
#' @aliases BookmarkColoringAlgorithm
#' 
#' @description This function calculates a teleportation vector from a given 
#' starting node to other nodes in a given network.
#'
#' @export BCA
#' @import dequer
#' @import igraph
#'
#' @param graph an object of type \code{\link[igraph]{igraph}}.
#' @param v a starting vertex from the above graph. Can be either its identifier 
#' or an igraph.vs object.
#' @param retentionCoefficient the restart probability for each node.
#' @param tol a tolerance threshold, indicating what the smallest value of color
#' is that should propagate further
#'
#' @return a preference/teleportation vector
#'
#' @references \insertRef{Berkhin2006}{Raspository}
#'
#' @examples
#' library(igraph)
#' g <- make_ring(5)
#' preferenceVector <- BCA(g, 1)
BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){

OK, let’s go over some of the tags…
In the first line you have, of course, the title of the function. In addition to the description, you can also add a details tag: while the description should give a short overview of the method, you can include theoretical background in the details.
Then the needed packages are imported. roxygen2 will convert these into lines in your NAMESPACE file.

With the param and return tags you should shortly describe the types of the parameters and the return value, and what they contain.

For the following use of the references tag, you also need to import the Rdpack package and include a “REFERENCES.bib” with the corresponding BibTeX entries in the inst folder. In my opinion you should always use this when implementing a method from some source, be it a book or a paper. Rdpack imports those references automatically from your BibTeX file into the documentation files.

Last but not least, I included a runnable example. This is important to give your users a starting point on how to use your function, but furthermore it serves as a test: every time your package is checked, the example code is run, so you will be notified if there are any errors.
But we will go more into automated testing another time, because there is of course more you can do. In any case, you should always write example code if your function is visible to the user.

After writing this documentation in your file, you have to compile it by writing:

roxygen2::roxygenise()
The generated help page then looks like this:

BCA(graph, v, retentionCoefficient = 0.4, tol = 0.001)

Arguments

graph: an object of type igraph.
v: a starting vertex from the above graph. Can be either its identifier or an igraph.vs object.
retentionCoefficient: the restart probability for each node.
tol: a tolerance threshold, indicating what the smallest value of color is that should propagate further.

Value

a preference/teleportation vector

References

Berkhin P (2006). “Bookmark-Coloring Algorithm for Personalized PageRank Computing.” Internet Mathematics, 3(1), 41–62. ISSN 1542-7951, doi: 10.1080/15427951.2006.10129116, http://www.internetmathematicsjournal.com/article/1412.
An excerpt from the generated documentation as it appears in RStudio
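By the way, if you use devtools, calling devtools::document() is a convenient wrapper that does the same thing.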

I hope that this short example will help you writing and documenting your own functions! 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/BCA.R.


Salt And Pepper Noise

And Measuring Noise

Using the nomenclature developed in yesterday’s post, I will today also implement a method for creating salt-and-pepper noise in images. This noise simulates dead pixels by setting them either to the lowest or the highest gray value, in our case 0 or 1.

First let’s install and load the Raspository, where the methods and class developed yesterday are located.

if (!requireNamespace("devtools", quietly = TRUE))
    install.packages("devtools")
devtools::install_github("David-J-R/Raspository")
library(Raspository)

Without further ado let me show you how to implement the salt and pepper noise.

saltAndPepperNoise <- function(object, percentage = .2){
    # select the indices to set to 0 or 1 at random
    indices <- sample(length(object@current), length(object@current) * percentage)
    # draw zeros and ones from a binomial distribution
    values <- rbinom(length(indices), 1, 0.5)

    object@current[indices] <- values
    return(object)
}

OK, our animal test subject today will be an owl.

set.seed(1)
owl <- imageBWFromJpeg("owl.jpg")
plot(owl)
A black and white picture of a snowy owl that looks somehow surprised.
A picture of a snowy owl I took at the Saarbrücker Wildpark

It looks surprised, doesn’t it? It probably knows about the salt and pepper noise we will add right away.

owlNoise <- saltAndPepperNoise(owl)
plot(owlNoise)
The same picture with a lot of grainy noise added.
The same picture with salt and pepper noise added

Uhm, this looks really annoying now. The other kinds of noise weren’t that disruptive. Just like dead pixels on your screen, these make you feel bad.

Introducing Two Measures of Noise


But what would be nice now is to have some measure of the noise. Let’s just use the mean squared error for it. I’ll implement a function that compares either two pictures or the current and original picture in an imageBW object, depending on the input parameters.

MSE <- function(object, other = NULL){
    if(is.null(other)){
        errorMatrix <- object@original - object@current
    }else{
        errorMatrix <- other@current - object@current
    }
    squaredErrorMatrix <- errorMatrix ^ 2
    return(mean(squaredErrorMatrix))
}

Nice! Now we finally have a measure for our noise. Of course this will be especially helpful later, when we want to evaluate the performance of reconstruction methods.
So let’s test it.

MSE(owlNoise)
## [1] 0.05545031
MSE(owlNoise, owl)
## [1] 0.05545031

As expected, both function calls leave us with the same result.

Let’s also implement the peak signal-to-noise ratio, which gives us back a decibel value. The lower the value, the more noise you have in the picture. You can calculate it according to the following formula, which I will also “simplify” a bit.

\(
PSNR(x, y) = 10 \cdot \log_{10} \left( \frac{MAX^{2}}{MSE(x, y)} \right)
\)

I hope you know the logarithm rules. 😛

\(
PSNR(x, y) = 10 \cdot \log_{10}(MAX^{2}) - 10 \cdot \log_{10}(MSE(x, y))
\) \(
PSNR(x, y) = 20 \cdot \log_{10}(MAX) - 10 \cdot \log_{10}(MSE(x, y))
\)

Where x and y are the two pictures to compare and MAX is the maximum value a pixel can take on.

And for MAX being one in our case we get:

\(
PSNR(x, y) = -10 \cdot \log_{10}(MSE(x, y))
\)

Subsequently, it’s now time to implement this measure as well:

PSNR <- function(object, other = NULL){
    mse <- MSE(object, other)
    return(-10 * log(mse, base = 10))
}

That was pretty straightforward. So let’s test it!

PSNR(owlNoise)
## [1] 12.56096

It’s a relatively low value, indicating a lot of noise. (A quick sanity check: -10 * log10(0.05545031) ≈ 12.56, matching the MSE from above.) In my opinion this decibel value is more intuitive than the raw MSE, probably because of the logarithm; indeed, the MSE value alone didn’t seem that big.

But that’s it for now. There will probably be further stuff next week, though probably not that much.
Thanks for reading and have a nice remaining weekend. 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageNoise.R.
But as I will expand the code, it will also probably grow there.

Multiplicative Noise And S4 Classes

Let’s Add Some More Noise

As mentioned in yesterday’s post, there are more kinds of noise. So let’s first load today’s picture and then add some multiplicative noise!

library(jpeg)
pigeon <- readJPEG("pigeon.jpg")
pigeonBW <- pigeon[,,1]
plot(as.raster(pigeonBW))
Black and white picture of a pigeon sitting on a street light
The original image of a pigeon on a street light

Oh look, it’s a pigeon on a lantern. I wonder what this little guy was thinking? But let’s not dwell on this. Rather, let’s think about how we can scramble up this picture!
I will first try multiplicative normally distributed noise, which is pretty straightforward if you followed yesterday’s post.

set.seed(1)
pigeonNormalNoise <- pigeonBW * (1 + rnorm(length(pigeonBW), sd = sd(pigeonBW)))
pigeonNormalNoise[pigeonNormalNoise > 1] <- 1
pigeonNormalNoise[pigeonNormalNoise < 0] <- 0
plot(as.raster(pigeonNormalNoise))
Black and white picture of a pigeon sitting on a street light with some noise on the brigther areas
The same picture with multiplicative noise added

Can you see the difference from the additive noise from yesterday? Indeed, the darker areas of the picture seem nearly untouched by the noise. That’s because the intensity of the noise is now dependent on the intensity of the pixel. And intensity is what dark pixels are lacking… so to speak.

But it’s really annoying and redundant having to write the same stuff over and over again, isn’t it? So let’s do what a Computer Scientist is best at: let’s generalize some methods.
But where to start?

In my view it would be useful to have some kind of (black and white) image class, which saves the image and the operations applied to it. Hence the process would also be reproducible, which is always nice.

imageBW <- setClass("imageBW", slots = list(original = "matrix", current = "matrix", operations = "list"))
imageBWFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    imageBW <- image[,,1]
    return(new("imageBW", original = imageBW, current = imageBW, operations = list()))
}
plot.imageBW <- function(object){
    plot(as.raster(object@current))
}

As an illustration I also overrode the default plot function, which makes our lives even easier. In the next step I’ll implement the different noise functions. There are probably more general ways of doing so, but for now this will suffice. I also haven’t come up with a good way of storing the applied operations yet, so I’ll probably do that another day.

cropPixels <- function(object){
    object@current[object@current > 1] <- 1
    object@current[object@current < 0] <- 0
    return(object)
}
addUnifNoise <- function(object){
    slot(object, "current") <- slot(object, "current") + 
        runif(length(slot(object, "current")), min = -1, max = 1)
    return(cropPixels(object))
}
addNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current + rnorm(length(object@current), sd = sd(object@current))
    }else{
        object@current <- object@current + rnorm(length(object@current), sd = sd)
    }
    return(cropPixels(object))
}
multiplyUnifNoise <- function(object){
    object@current <- object@current * (1 + runif(length(object@current), min = -1, max = 1))
    return(cropPixels(object))
}
multiplyNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current * ( 1 + rnorm(length(object@current), 
                                                       sd = sd(object@current)))
    }else{
        object@current <- object@current * ( 1 + rnorm(length(object@current), 
                                                       sd = sd))
    }
    return(cropPixels(object))
}

For the future I would also like to have this working with infix operators, meaning that I could do stuff like image <- image + normalNoise(...) or image <- image * (1 + normalNoise(...)), so that I have a neat little grammar for working with images.
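Here is a rough, hedged sketch of how that grammar could look. Both normalNoise and the + method below are hypothetical and not part of the Raspository: the noise generator returns a plain matrix, and an S4 method for + adds it to the current pixel matrix.

# hypothetical: draw a noise matrix matching the image's dimensions
normalNoise <- function(img, sd = NULL){
    if(is.null(sd)){
        sd <- sd(img@current)
    }
    matrix(rnorm(length(img@current), sd = sd), nrow = nrow(img@current))
}

# hypothetical: adding a matrix to an imageBW applies it as noise and crops
setMethod("+", signature("imageBW", "matrix"), function(e1, e2){
    e1@current <- e1@current + e2
    cropPixels(e1)
})

# usage: image <- image + normalNoise(image)

However, for the moment the functions above will do the job. Now let us make use of the newly implemented methods and add a lot of noise to the picture.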

image <- imageBWFromJpeg("pigeon.jpg")
image <- addNormalNoise(image)
image <- multiplyUnifNoise(image)
image <- addUnifNoise(image)
plot(image)
Black and white picture of a pigeon sitting on a street light with so much noise, that you can only recognize the outlines.
The same picture with lots of kinds of noises added

Haha, there’s not much left of this poor pigeon. But you can still see its outlines and the lamp’s. And I’m quite certain that there are algorithms out there that could reconstruct a lot of the original image. You will see that later on. 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageOneChannel.R.
But as I will expand the code, it will also probably grow there.

How to Add Noise To Images

Or Let’s Make Some Noise!!

As I wrote in one of my last posts, there will be some content about image processing and computer vision. And following my second lecture on it, here we go! Fortunately R has some capabilities for handling pictures, of which I will make use.

For now the topic will just be how to add noise to images, which can be useful, e.g., if you want to evaluate the performance of an algorithm for handling noise. So we’re simulating noise. Natural noise in images can come from different sources… The most common would be high ISO values and/or long exposure times. So, in general, low light conditions.
However, I’ll only work with black and white pictures for today. Maybe color comes at some point in the future. 🙂

Loading Pictures Into R

First I’ll show you how to load pictures into R and how to plot them again. It’s quite easy tbh:

library(jpeg)
goose <- readJPEG("goose.jpg")
class(goose)
## [1] "array"
dim(goose)
## [1] 600 897   3

As the picture is black and white, we only need one channel of it. And we can just plot it by transforming it into a raster object:

gooseBW <- goose[,,1]
plot(as.raster(gooseBW))
A Canada goose grooming itself
A picture of a Canada goose I took in Saarbrücken

Cute, this goose, isn’t it? 🙂 Now let’s destroy this picture with noise! Nyehehehehe!

Adding The Noise

To begin with, I will add some uniformly distributed noise. As the values in the matrix are between 0 and 1, I will add values between -1 and 1.

set.seed(1)
gooseUniformNoise <- gooseBW + runif(length(gooseBW), min = -1, max = 1)

But we can’t plot this as is, because the matrix now also contains values above 1 and below 0. I’ll fix this by cropping at the boundaries and then plot.

gooseUniformNoise[gooseUniformNoise > 1] <- 1
gooseUniformNoise[gooseUniformNoise < 0] <- 0
plot(as.raster(gooseUniformNoise))
The same picture with uniform noise added. It now looks grainy
The same picture with uniform noise added

Interesting how you can still make out the goose in spite of the noise, isn’t it? Now let’s do the same with normally distributed noise. I’ll take the standard deviation of the black and white picture.

gooseNormalNoise <- gooseBW + rnorm(length(gooseBW), sd = sd(gooseBW))
gooseNormalNoise[gooseNormalNoise > 1] <- 1
gooseNormalNoise[gooseNormalNoise < 0] <- 0
plot(as.raster(gooseNormalNoise))
The same picture with normally distributed noise added. It now looks like an old black and white photograph
The same picture with normally distributed noise added


In the lecture we also covered multiplicative and impulse noise. However, I won’t do them in this post; maybe another time.
Anyway, do you already see why I like R so much? 🙂 If you do Linear Algebra and Statistics stuff, it’s just super comfy!
I’ll probably show you even more reasons why when I demonstrate how to denoise pictures again. But that’s it for today.

Have a nice weekend!

Installation Of Coq On Ubuntu

When I talked in my first post about giving stuff back, I also thought about simple tutorials, like how to install software. While such tutorials are not strictly necessary, they save time… They saved me a lot of time! Especially when you’re new to an operating system, a university, a programming language, etc. And sometimes you’re a lot of those things at once.
To put it simply, sometimes it’s just good to be able to outsource something, so that you can spend your time thinking about more urgent stuff.

So here we go… Today I will show you how I installed Coq on my system. Coq is software for writing and managing logical proofs, and I need it for my lecture Introduction to Computational Logic, which will make use of it.

For the impatient amongst you, I suggest you skip to the end… Just saying. :’D

So let me first show you which Ubuntu release I’m using. Depending on your exact operating system, distribution and release, your installation might vary.

lsb_release -d
## Description: Ubuntu 18.04.2 LTS

Alright, now that you know that: first I have to install opam, which is the package manager for OCaml. Phew… what luck! With a Linux distribution based on Debian, it is available through the system’s package manager. So I can just install it by typing:

sudo apt-get install opam

Subsequently I have to initialize the state of opam:

opam init
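Depending on your opam version, opam init might also suggest adding something like eval `opam config env` to your shell profile, so that the installed binaries end up on your PATH. Just follow what the output of opam init tells you.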

Now let’s list all the packages available through opam.

opam list -a | head
## # Available packages for system:
## 0install                                --  The antidote to app-stores
## aacplus                                 --  Bindings for the aacplus library whi
## abella                                  --  Interactive theorem prover based on 
## abt                                     --  OCaml port of CMU's abstract binding
## acgtk                                   --  Abstract Categorial Grammar developm
## acme                                    --  A library to interact with the acme 
## acpc                                    --  Chemoinformatics tool for ligand-bas
## aez                                     --  Alt-Ergo Zero is an OCaml library fo
## afl                                     --  American Fuzzy Lop fuzzer by Michal 
## Fatal error: exception Sys_error("Broken pipe")

I piped the output into head, because there are quite a lot of packages, with which I don’t wanna clutter up this post. Don’t mind the Broken pipe error; that’s because head kills opam after reading the first ten lines.

The packages we now want are Coq and CoqIDE. We can get them by simply typing:

opam install coq coqide

OK… that didn’t work. What did opam say?

=> This package relies on external (system) dependencies that may be missing.
   `opam depext conf-gtksourceview.2' may help you find the correct
   installation for your system.

The former state can be restored with:
    opam switch import "~/.opam/system/backup/state-20190310192746.export"

Therefore let’s do what the output tells us to:

opam depext conf-gtksourceview.2

By installing some additional system packages, this command took care of the problem for me.
Afterwards, let’s try to install CoqIDE again. Coq itself worked fine the first time.

opam install coqide

This time it worked… Perfect!

A Much Simpler Way


OK, after I couldn’t figure out how to start the IDE, I did some googling and found out that I could also just install it from the Ubuntu repository using the following command:

sudo apt-get install coqide
A screenshot of the installed CoqIDE on Ubuntu
Screenshot of the CoqIDE on my system

I should’ve checked this first; it would’ve saved me a lot of time. But now it works! x)
Maybe my foolishness of not checking the Ubuntu repo first teaches you something.

Have a nice evening!