# Tag Archives: Rstats

Rstats is a common hashtag for the programming language R. The "stats" part comes from the fact that the language is used for statistical computing, but in my opinion the language can also find application in other areas.
I like to use it for the convenience of installing packages and of seamlessly integrating C++ and other languages. Furthermore, the language provides you with some useful features for ensuring reproducibility. This is especially useful for data science, but also for other areas of science.

As you guys might know, I’m studying Bioinformatics. For bioinformaticians, Bioconductor is a very useful project, as it contains various tools for this area of research written in R. And I’m even the maintainer of one of the packages there.

So let me show you all the nice things, you can do in R!

# Decomposing An Image Into Frequencies

The Fourier transform is a useful tool for disassembling a signal into its component frequencies. Among others, it has applications in signal processing, quantum mechanics and even in Bioinformatics for the fast calculation of multiple alignments.
So it’s no wonder that it is also used in image processing. It is often used for the design of filters and for image convolutions in general, as it makes it possible to calculate those pretty fast. It can furthermore be used to eliminate regular noise in images. But those are both topics for another time.

Today I will show you how to calculate the Fourier spectrum of an image, which is in any case useful for visualizing what the Fourier transformation actually does. You can also use it to investigate repeating features in your image. And from there it’s just one step further to using it for denoising an image.

## How To Calculate It

I won’t go much into the theoretical background right now, but I will give you the means to calculate it. If you want to get more intuition for how the formulas for the Fourier transform are derived, I recommend watching the following video by 3Blue1Brown.
To understand this part you should also know at least a little bit about the concept of imaginary numbers. But maybe I’ll do a quick tutorial on them at some point later. Just let me know if you’d like to see that.

So what we actually need is the two-dimensional discrete Fourier transformation, because pictures are of course two-dimensional. Besides discrete Fourier transformations there are also continuous ones, which can be applied to continuous functions.
So let me just give you the equations:

$$\hat{f}(p,q) = \frac{1}{\sqrt{N \cdot M}} \cdot \sum_{m = 0}^{M - 1}\sum_{n = 0}^{N - 1}f(n,m) \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot p \cdot m}{M}} \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot q \cdot n}{N}}$$

In this case N is the number of rows in the image and M the number of columns. The image itself is represented as the two-dimensional function $$f(n, m)$$ and $$\hat{f}(p,q)$$ is the Fourier-transformed image, where p and q are the new variables indexing the frequency domain. And $$i$$ is the imaginary unit: a solution to the equation $$x^2 = -1$$, which cannot be solved by any real number.

Now the value of each pixel is a complex number. Do you have any idea how we could visualize this in a sensible manner? Well… Complex numbers are basically just coordinates in a two-dimensional plane. So we can just take their distance to the origin of that coordinate system, also known as the modulus. Applying this to every pixel gives us the so-called Fourier spectrum of an image.
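By the way, R has complex numbers built in, so you can check this directly in the console (a tiny sketch, nothing package-specific):

```r
# a complex number is a point in the plane; its distance to the
# origin is the modulus, which R also computes with Mod()
z <- 3 + 4i
sqrt(Re(z)^2 + Im(z)^2)  # 5
Mod(z)                   # 5, same thing built in
```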

In the next parts I will show you step by step how to implement all of this and make some improvements on it. Luckily for us, we don’t need to implement the Fourier transform ourselves. In R it is already available in the form of the fft function, and many other programming languages provide you with high-performing implementations of it. Even though implementing the Fourier transform itself isn’t that big of a deal, performance could be. But maybe I’ll do some home-grown implementation of it at a later point in time.
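If you’ve never used fft before, here is a minimal one-dimensional sketch of it and its inverse. Note that R leaves the 1/N normalisation of the inverse transform to you:

```r
x <- c(1, 2, 3, 4)
X <- fft(x)                                       # forward transform
xBack <- Re(fft(X, inverse = TRUE)) / length(x)   # inverse needs a manual 1/N
xBack                                             # 1 2 3 4, up to rounding
```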

## Some Helper Functions

First we’ll need some helper functions. I was thinking about giving some of them their own blog post. But then again they’re not that interesting, so they wouldn’t warrant that. We will also use some helper functions developed in my last post.

To make our lives easier, min and max functions for images would be nice, returning the minimal and maximal grey values of an image. The implementation is straightforward:

min.imageOneChannel <- function(object, na.rm = FALSE){
    return(min(object@imageMatrix, na.rm = na.rm))
}

max.imageOneChannel <- function(object, na.rm = FALSE){
    return(max(object@imageMatrix, na.rm = na.rm))
}

Next I’ll introduce the first of many point operations. Point operations change the value of each pixel independently of neighboring pixels. As the Fourier spectrum of an image might have values below zero or above one at certain pixels, we will need to rescale it so that it is inside the boundaries again. We can do this with the following function:

affineRescale <- function(img){
    a <- max(img) - min(img)
    b <- min(img) / a

    return(affineGreyscaleTransformation(img = img, 1.0/a, -b))
}

After applying this function the minimal value will be equal to zero, the maximal to one, and all other values will be scaled in between.
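Since affineGreyscaleTransformation comes from an earlier post, here is the same rescaling sketched on a plain matrix, so you can convince yourself of the formula:

```r
# (x - min) / (max - min) maps the smallest value to 0 and the largest to 1
m <- matrix(c(-2, 0, 2, 6), nrow = 2)
a <- max(m) - min(m)
rescaled <- (m - min(m)) / a
range(rescaled)  # 0 1
```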

## The Fourier Spectrum

So let’s implement the calculation of the Fourier spectrum of an image. You’ll see that it is pretty straightforward…

calculateFourierSpectrum <- function(img){

    imgFTrans <- fft(img@imageMatrix)

    ## calculate the euclidean norm of the complex coordinates
    fourierSpectrum <- new("imageOneChannel",
                           imageMatrix = sqrt(Re(imgFTrans) ^ 2 +
                                              Im(imgFTrans) ^ 2))

    return(fourierSpectrum)
}

Basically just two or three lines of code (depending on whether you count the return). And the nice thing about R’s native fft function is that it keeps the dimensions of the matrix.
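You can verify that quickly yourself: fft applied to a matrix performs the two-dimensional transform and returns a matrix of the same shape:

```r
m <- matrix(runif(12), nrow = 3, ncol = 4)
spec <- fft(m)   # two-dimensional DFT of the matrix
dim(spec)        # 3 4, same as the input
```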

Now let’s load today’s image:

library(Raspository)
fence <- imageOneChannelFromJpeg("Fence.jpg")

It’s a fence, which has a lot of repeating structures. So it should give us some interesting results.

fftImg <- affineRescale(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft1.jpg")

Huh?! What’s that? Nothing? Well, you can see a bit in the corners, but not much.

## The Fourier Spectrum Scaled With The Logarithm

To make the spectrum better visible, we need to rescale it in a different manner. You remember that the equation for the Fourier transform contains the exponential function? So it would probably be sensible to take the logarithm to rescale the image. But remember… the logarithm is not defined for values smaller than or equal to zero. So it’s a good idea to add one before scaling with the logarithm. The log1p function in R already does this for us.

logarithmicDynamicCompression <- function(img, c = NULL){
    if(is.null(c)){
        c <- 1 / log1p(max(img@imageMatrix))
    }

    return(new("imageOneChannel", imageMatrix = c * log1p(img@imageMatrix)))
}

So let’s try it again.

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft2.jpg")

Now that’s the Fourier spectrum of the fence. But there’s something not so nice about it: the origin of the spectrum is not in the middle, but at the borders. This is a property of the discrete form of the Fourier transformation. In the continuous one the origin lies in the middle.

## Shifting The Fourier Spectrum Of An Image

But lucky for us there’s a quick fix for this minor problem. We just need to multiply the image with an image of the same size first. This second image contains ones and minus ones arranged in a checkerboard-like pattern. I won’t show you here why this is so… But the Fourier transformation has many neat properties you can use, so I’ll probably do a post about them in the future too.

For creating the checkerboard sign matrix I use some data.table magic. I’ll probably also do one or more posts about this awesome data structure in the future, as it makes the life of a data scientist much easier.

library(data.table)

calculateFourierSpectrum <- function(img){

    ## generate a checkerboard sign matrix to shift the image in the Fourier domain
    DT <- CJ(1:nrow(img@imageMatrix), 1:ncol(img@imageMatrix))
    DT[, sign := (-1)^(V1 + V2)]
    shiftMatrix <- new("imageOneChannel",
                       imageMatrix = as.matrix(dcast(DT, V1 ~ V2, value.var = "sign")[, -1]))

    shiftedImg <- img * shiftMatrix

    imgFTrans <- fft(shiftedImg@imageMatrix)

    ## calculate the euclidean norm
    fourierSpectrum <- new("imageOneChannel",
                           imageMatrix = sqrt(Re(imgFTrans)^2 + Im(imgFTrans)^2))

    return(fourierSpectrum)
}
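If you don’t want to pull in data.table just for this, the same sign matrix can also be built with base R’s outer function (an alternative sketch, not the package code):

```r
# (-1)^(i + j) alternates between 1 and -1 in a checkerboard pattern
shiftMatrix <- outer(1:4, 1:4, function(i, j) (-1)^(i + j))
shiftMatrix[1:2, 1:2]  # rows: 1 -1 / -1 1
```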

Finally it’s time to also test that!

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft3.jpg")

And voila! Now I’m satisfied. 🙂
By the way, the Fourier transform flips the direction of patterns. So the big vertical stripe could be the horizon. But it could also be a boundary artifact… something I’ll show you in more detail later, along with how to fix it.
All the stripes in the other direction represent frequencies across the picture. And what’s nice… you can change features of the image here in Fourier space and then just as easily transform it back.

You’ll definitely hear more about Fourier transformations from me. I hope you enjoyed this post. I’d love to hear from you.

## Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOneChannel, R/imagePointOperations and R/imageFeatures.


# Or More Object Oriented Programming (OOP) in R

As I told you in one of my recent posts, I implemented a lot of functions related to image processing and built a lot on my image processing framework in the R programming language. The changes include methods I learned in the lecture, e.g. convolutions, filters of different kinds and the calculation of statistical features. But I also did some code refactoring and included some new helper methods.
A set of those helper methods will be the topic of this blog post. I’m talking about arithmetic image operators. We will need those a lot in later exercises, so it’s useful to have them implemented in an easy-to-use way.

So the basic idea is just to overload the different arithmetic operators to make them usable with images of the type imageOneChannel.
It should be possible to use the functions either with two images or with one image and one scalar.

## The is. Function

As a little helper we will implement the is.imageOneChannel function. Functions with the prefix is. are used in R to test whether an object is of a given type. It doesn’t save us much over using the inherits function from base R, but it makes the process at least somewhat more comfortable.
To be honest, what I’ve implemented is just a glorified wrapper around the inherits function, but that’s generally how it is done. Of course that also leaves us the possibility to write a little documentation using roxygen2. So here we go:

#' Is It A One Channel Image
#'
#' @description Checks if an object is of type \code{\link[Raspository]{imageOneChannel}}.
#'
#' @export
#'
#' @param x any R object.
#'
#' @return \code{TRUE}, if the object is of type \code{\link[Raspository]{imageOneChannel}}.
#'  \code{FALSE} otherwise.
#'
#' @examples
is.imageOneChannel <- function(x){
    return(inherits(x, "imageOneChannel"))
}


So, that was easy. And it’s not going to get any harder this post actually.
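To see it in action outside the package, here is a self-contained sketch with a minimal stand-in class definition (the real imageOneChannel class from my earlier posts has more to it):

```r
library(methods)

# minimal stand-in for the package's S4 class, just for this demo
setClass("imageOneChannel", representation(imageMatrix = "matrix"))

is.imageOneChannel <- function(x){
    return(inherits(x, "imageOneChannel"))
}

img <- new("imageOneChannel", imageMatrix = diag(2))
is.imageOneChannel(img)      # TRUE
is.imageOneChannel(diag(2))  # FALSE
```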

## Arithmetic Image Operators

Overloading an arithmetic operator in R is pretty simple as well. Take a look at the following:

`+.imageOneChannel` <- function(A, B)

That will be the function header of our plus operator. Let’s analyse it bit by bit.
First, the function name is surrounded by backticks. That’s how you define so-called infix operators, as their names contain special symbols, which normal function names can’t.

Then we have the plus sign and, after the dot, the class name imageOneChannel, which indicates that this is the plus operator for this specific class. Later you don’t call this function as +.imageOneChannel, but just with +. And as far as I’ve noticed, this special operator is always called as long as one of the two parameters is of type imageOneChannel. But I actually don’t know what would happen if the other summand were of another class that also has its own plus operator defined. Anyone got an idea? I’ll probably test that at some point.
And last, like in any other function, <- function(A, B) assigns the function to the function name. As a plus function it takes just two parameters.

Now let me show you my full implementation of this arithmetic image operator with documentation:

#' Image Addition
#'
#' @description Addition of one image with another image or a scalar. At least
#' one of the two parameters has to be an image. If both are images, the operation
#' is executed entrywise.
#'
#' @export
#'
#' @param A An image of class \code{\link[Raspository]{imageOneChannel}} or
#' a scalar.
#' @param B An image of class \code{\link[Raspository]{imageOneChannel}} or
#' a scalar.
#'
#' @return The resulting image of class \code{\link[Raspository]{imageOneChannel}}.
#'
#' @examples
`+.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix + B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A + B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix + B))
    }
}


Pretty easy, isn’t it? We just need to check which of the two parameters is an image and then access its matrix slot.
I mean, for now our imageOneChannel isn’t much more than a wrapper around a standard R matrix. But that’s OK, because we definitely want to define some behaviour later on that differs from a standard matrix. And we also need some methods that don’t necessarily make sense for matrices containing things other than pixels.
Some people might know that I’m not the biggest proponent of OOP and I actually think it’s overused. But if you have a data type that should express behaviour specific to it, it’s sensible. Just don’t implement a convoluted inheritance hierarchy with lots and lots of stuff you’ll never need and that people need at least a PhD to understand. Keep it simple, stupid!
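Here is a self-contained sketch of the dispatch at work, again with a minimal stand-in class so it runs outside the package (the body is a compressed variant of the operator shown above):

```r
library(methods)

# stand-in class, just for this demo
setClass("imageOneChannel", representation(imageMatrix = "matrix"))

`+.imageOneChannel` <- function(A, B){
    # unwrap whichever side is an image, then add the matrices/scalars
    if(inherits(A, "imageOneChannel")) A <- A@imageMatrix
    if(inherits(B, "imageOneChannel")) B <- B@imageMatrix
    return(new("imageOneChannel", imageMatrix = A + B))
}

img <- new("imageOneChannel", imageMatrix = matrix(1, 2, 2))
res <- img + 0.5        # R dispatches to `+.imageOneChannel`
res@imageMatrix[1, 1]   # 1.5
```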

The other arithmetic image operators are implemented in the same manner. I’ll spare you the documentations this time, as they’re essentially all the same, but you can find them in my repository:

`-.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix - B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A - B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix - B))
    }
}

`*.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix * B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A * B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix * B))
    }
}

`/.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix / B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A / B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix / B))
    }
}

## Testing The Arithmetic Image Operators

library(Raspository)

We can now do lots and lots of things with those operators. So first let’s load our test image…

kadser <- imageOneChannelFromJpeg("kadser.jpg")

Without looking at the original image let’s take a look at the negative of the picture.

negativeKadser <- 1 - kadser
writeJPEG.imageOneChannel(negativeKadser, "negativeKadser.jpg")

Spoopy, isn’t it?

And now let’s add this picture together with another one. We’ll have to average them, so that we don’t exceed the maximal grey-value.

owl0r <- imageOneChannelFromJpeg("owl0r.jpg")
addedImages <- (kadser + owl0r) / 2
writeJPEG.imageOneChannel(addedImages, "addedImages.jpg")

Well, this added image won’t win us any contests, but at least we can say now, that we can add images together. That will be valuable later. You’ll see.

That’s also it for today. See you soon!

## Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOperations.R and R/imageOneChannel.R.


# Using Rcpp

Following my last post, I will now show you how to integrate C++ code into an R package using Rcpp. Nothing revolutionary, but maybe it will still help some of you. 🙂
I will demonstrate this on my GitHub repository and R package Raspository. And I will progress on optimizing the dithering method I implemented last time. For the following I assume that you already use roxygen2.

## Motivation

So at first you should ask yourself why you want to introduce C++ code to your package. One very valid reason might be that the C++ code is already there and you just want to integrate it into R, or maybe even just write an R wrapper around it.
The main reason why I did this was to speed up my code. Some operations in R are already pretty fast. But if you want to write a function with a loop that does complicated things, which would be torture to implement with apply functions (if possible at all), C++ is a good choice.

There might of course be even more reasons, but we will save them for another time…
Let’s now get to how to do it.

## Preparation For Integrating Rcpp Into R Packages

Before we can seamlessly integrate C++ into our package we need to do some preparations. First we have to import the Rcpp package into our namespace and link to its dynamic library.
For this we add the following to the DESCRIPTION file of the package:

Imports: Rdpack, BEclear, futile.logger, stats, dequer, jpeg, data.table, RobustRankAggreg, igraph, graphics, grDevices, methods, abind, Rcpp
LinkingTo: Rcpp


And we also have to use the dynamic library that will be created in our package. For this we add the following statement to Raspository.R:

#' Raspository-package
#'
#' @author David Rasp
#'
#' @docType package
#'
#' @import Rdpack
#'
#' @title The Raspository
#'
#' @useDynLib Raspository
"_PACKAGE"

As far as I understand it, you could also add this statement anywhere in the documentation of your package.

## Creating our function with Rcpp

So the translation of the dissipatePixel function from last time is pretty straightforward. It’s just a matter of changing the syntax a bit. I also added a roxygen skeleton. It works the same as in R, just that you start a line with two forward slashes and an apostrophe (//') instead of a hash-tag. The namespace of Rcpp provides you with classes like NumericMatrix that can be used similarly to their R counterparts, and the objects coming from R can be treated as such. If you want some more reference material, I would advise you to read the Rcpp quickref by Dirk Eddelbuettel. I have it open most of the time I’m working with Rcpp.

#include <Rcpp.h>
using namespace Rcpp;

//' Calculating the Dissipation of Pixels
//'
//' @export dissipatePixel
//' @import Rcpp
//'
//' @references \insertRef{Hagenburg2009}{Raspository}
//'
//' @return the image as matrix after the next time step
//'
// [[Rcpp::export]]
SEXP dissipatePixel(const SEXP& imgOriginal, const double& minimalTreshold) {
    NumericMatrix img(imgOriginal);
    NumericMatrix imgAtNewStep(img.nrow(), img.ncol());

    for(int i = 0; i < img.nrow(); i++){
        for(int j = 0; j < img.ncol(); j++){

            double currentPixel = img(i, j);
            double dissipated = 0.0;

            if(currentPixel > 1.0 || currentPixel < minimalTreshold){

                double toDissipate = currentPixel;

                if(currentPixel > 1.0){
                    toDissipate -= 1.0;
                }

                double tmp1 = toDissipate / 9.0;
                double tmp2 = toDissipate / 36.0;

                if(i > 0){
                    imgAtNewStep(i - 1, j) += tmp1;
                    dissipated += tmp1;
                }

                if(j > 0){
                    imgAtNewStep(i, j - 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i < img.nrow() - 1){
                    imgAtNewStep(i + 1, j) += tmp1;
                    dissipated += tmp1;
                }

                if(j < img.ncol() - 1){
                    imgAtNewStep(i, j + 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i > 0 && j > 0){
                    imgAtNewStep(i - 1, j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i > 0 && j < img.ncol() - 1){
                    imgAtNewStep(i - 1, j + 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i < img.nrow() - 1 && j > 0){
                    imgAtNewStep(i + 1, j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i < img.nrow() - 1 && j < img.ncol() - 1){
                    imgAtNewStep(i + 1, j + 1) += tmp2;
                    dissipated += tmp2;
                }

            }else{

                double tmp1 = currentPixel / 9.0;
                double tmp2 = currentPixel / 36.0;

                if(i > 0 && img(i - 1, j) > currentPixel && img(i - 1, j) < 1.0){
                    imgAtNewStep(i - 1, j) += tmp1;
                    dissipated += tmp1;
                }

                if(j > 0 && img(i, j - 1) > currentPixel && img(i, j - 1) < 1.0){
                    imgAtNewStep(i, j - 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i < img.nrow() - 1 && img(i + 1, j) > currentPixel && img(i + 1, j) < 1.0){
                    imgAtNewStep(i + 1, j) += tmp1;
                    dissipated += tmp1;
                }

                if(j < img.ncol() - 1 && img(i, j + 1) > currentPixel && img(i, j + 1) < 1.0){
                    imgAtNewStep(i, j + 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i > 0 && j > 0 && img(i - 1, j - 1) > currentPixel && img(i - 1, j - 1) < 1.0){
                    imgAtNewStep(i - 1, j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i > 0 && j < img.ncol() - 1 && img(i - 1, j + 1) > currentPixel && img(i - 1, j + 1) < 1.0){
                    imgAtNewStep(i - 1, j + 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i < img.nrow() - 1 && j > 0 && img(i + 1, j - 1) > currentPixel && img(i + 1, j - 1) < 1.0){
                    imgAtNewStep(i + 1, j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if(i < img.nrow() - 1 && j < img.ncol() - 1 && img(i + 1, j + 1) > currentPixel && img(i + 1, j + 1) < 1.0){
                    imgAtNewStep(i + 1, j + 1) += tmp2;
                    dissipated += tmp2;
                }

            }

            // add the non-dissipated amount to the same pixel in the next time step
            imgAtNewStep(i, j) += currentPixel - dissipated;
        }
    }

    return imgAtNewStep;
}

After writing and saving this function, we have to generate the RcppExports.cpp and RcppExports.R files, which contain a wrapper for our C++ function that can be loaded in R after compiling the code.
We can comfortably do this with:

Rcpp::compileAttributes()

Then we should roxygenise our package, so that the namespace and the documentation get updated. We do this with:

roxygen2::roxygenise()

Now finally we can recompile the whole package and load the new version. The simplest way imo is to just use Ctrl+Shift+B in RStudio.

## Performance comparison

Of course we should also check how much faster the new version is. I’ll use microbenchmark for this.

library(microbenchmark)
data <- matrix(runif(625), 25, 25)
microbenchmark(dissipatePixel(data, minimalTreshold = 0.05), dissipatePixelOld(data, minimalTreshold = 0.05))
## Unit: microseconds
##                                             expr      min       lq
##     dissipatePixel(data, minimalTreshold = 0.05)   70.891   72.368
##  dissipatePixelOld(data, minimalTreshold = 0.05) 4479.941 4622.600
##        mean   median        uq        max neval
##    95.23017   77.003   82.4445   1486.321   100
##  6514.79719 4677.444 4779.1125 161310.562   100

I think the result speaks for itself. This optimization was very worthwhile. The version implemented in C++ is about 60 times faster. That’s a lot. This could be the difference between one second and one minute. And of course longer calculations also use more electricity, which means they have a larger CO2 footprint. So be nice to our atmosphere and code efficiently. 😉

That’s it for now.
Later,
David.

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under src/dissipatePixel.cpp.


# Revisiting Dithering

It’s been a while since my last post and an even longer while since the last one about image processing. However, I’ll just catch up where I left off. The last post in this category was about image dithering. A common usage of dithering is the conversion of a grey-scale picture to a picture consisting of just black and white pixels, saving space and also being useful for some print media. On a further note, dithered images have a nice visual style, which I would call something like dollar-bill style.

Conventional dithering methods however also introduce artifacts into pictures, and of course they also lose sharpness. So the question is whether one could improve on those conventional dithering methods in order to reduce artifacts and increase sharpness.
If you’re mathematically minded, you might also be worried that conventional dithering is not rotationally invariant, which I assume is due to the fact that the dithering kernel is not symmetric and to the way one iterates over the picture. Rotationally invariant basically means that the output of a method doesn’t change when you rotate your input image, which is of course a desirable trait.

So what about taking some inspiration from physics? After all, physical laws of motion are rotationally invariant. Lattice Boltzmann methods are a way of computing dynamics in fluids, and they have been successfully employed for dithering by Hagenburg et al. The paper on this research also served as my main source for the method and its implementation, even though my specific implementation might differ in some details.
I’ll call the method Lattice Boltzmann Dithering for the rest of the article, as I didn’t find any other handy name for it.

## The Idea Behind Lattice Boltzmann Dithering

The idea behind the method is to model the picture like a fluid with particles in it; in this case the grey values are the particles. A pixel that is black (with a grey value of 0) has, so to say, no particles in it. Those particles dissipate in the fluid over time. Time is modelled in discrete steps. During each time step the particles dissipate according to the following laws:

• If a pixel has a value v greater than 1, where 1 is the maximal grey value, (v – 1) is distributed amongst the neighbors of the pixel and the value of the original pixel becomes 1.
• If a pixel’s value v is lower than a minimal threshold, which the user can set, v is distributed over its neighbors and the pixel itself becomes 0.
• Otherwise a fraction of the pixel’s value is distributed to each neighbor that has a larger value while being smaller than 1. This fraction is subtracted from the pixel itself.

I hope you see how with these laws pixels with a high value (many particles) attract more value. After some time steps you should more or less only be left with black and white pixels.

The whole process is stopped when the difference from one time step to the next becomes sufficiently small. And that’s about it… not really that complicated, if you ask me! 🙂
Of course the paper goes more into detail about the theoretical background and proves some things about the method. If you’re interested, you should absolutely read it.
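By the way, the fractions used for the distribution (you’ll see them as 1/9 and 1/36 in the code below) are, as far as I can tell, the classic D2Q9 lattice weights from Lattice Boltzmann methods: 4/9 for the pixel itself, 1/9 for each axial neighbor and 1/36 for each diagonal neighbor. They sum to one, which is exactly why the total grey value is conserved:

```r
# D2Q9 lattice weights: centre, four axial neighbours, four diagonals
weights <- c(4/9, rep(1/9, 4), rep(1/36, 4))
sum(weights)  # 1
```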

## Implementation of Lattice Boltzmann Dithering

And now let’s come to the part you’ve all been waiting for… the implementation. First I’ll implement a method for the dissipation of the values from each pixel, which will be executed in each time step to get the next distribution of values.

One thing to keep in mind when working with images is that they have borders. You don’t want to dissipate values across borders, or worse, access a part of the image that isn’t there. So you have to treat the pixels at the borders differently. In the case of this method, I always check first whether a neighboring pixel exists.

dissipatePixel <- function(img, minimalTreshold = 0.01){

    imgAtNewStep <- matrix(0, nrow = nrow(img), ncol = ncol(img))

    for(i in seq(nrow(img))){
        for(j in seq(ncol(img))){
            if(img[i,j] > 1 || img[i,j] < minimalTreshold){

                toDissipate <- img[i,j]

                if(img[i,j] > 1){
                    toDissipate <- toDissipate - 1
                }

                dissipated <- 0

                if(i > 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(j > 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(i < nrow(img)){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(j < ncol(img)){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(i > 1 && j > 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if(i > 1 && j < ncol(img)){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if(i < nrow(img) && j > 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if(i < nrow(img) && j < ncol(img)){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                ## add the non-dissipated amount to the same pixel in the next time step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }else{

                dissipated <- 0
                currentPixel <- img[i,j]

                if(i > 1 && img[i - 1,j] > currentPixel && img[i - 1,j] < 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(j > 1 && img[i,j - 1] > currentPixel && img[i,j - 1] < 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(i < nrow(img) && img[i + 1,j] > currentPixel && img[i + 1,j] < 1){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(j < ncol(img) && img[i,j + 1] > currentPixel && img[i,j + 1] < 1){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(i > 1 && j > 1 && img[i - 1,j - 1] > currentPixel && img[i - 1,j - 1] < 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if(i > 1 && j < ncol(img) && img[i - 1,j + 1] > currentPixel && img[i - 1,j + 1] < 1){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if(i < nrow(img) && j > 1 && img[i + 1,j - 1] > currentPixel && img[i + 1,j - 1] < 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if(i < nrow(img) && j < ncol(img) && img[i + 1,j + 1] > currentPixel && img[i + 1,j + 1] < 1){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                ## add the non-dissipated amount to the same pixel in the next time step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }
        }
    }

    return(imgAtNewStep)
}


Done that! Now the easy part…
But as the implementation in R with loops is incredibly inefficient, I’ll just run 50 time steps this time. I will however implement this method in C++ at some later point, where loops aren’t inefficient. That will also serve as a good example of how to integrate C++ code into R.

lbDithering <- function(img, epsilon, minimalTreshold){

    i <- 0
    while(TRUE){

        imgAtNewStep <- dissipatePixel(img = img, minimalTreshold = minimalTreshold)

        ## stopping criterion disabled for now; run a fixed 50 steps instead
        #if(norm(imgAtNewStep - img, type = "2") < epsilon){
        if(i >= 50){
            return(imgAtNewStep)
        }else{
            img <- imgAtNewStep
        }
        i <- i + 1
    }
}


## Usage

Now let’s reap the fruits of our coding and test the method on a picture I’ve selected. 🙂

birb <- imageBWFromJpeg("birb.jpg")
birbDithered <- lbDithering(birb@current, epsilon = 20, minimalTreshold = 0.05)
writeJPEG(round(birbDithered), "birbDithered.jpg")


Let me show you the resulting Jpeg:

Isn’t that beautiful? As comparison the original:

And a conventionally dithered version:

birbDithered2 <- errorDiffusiondDithering(birb, method = "mea")
writeJPEG(birbDithered2@current, "birbDithered2.jpg")


You can see more structure in the one created with Lattice Boltzmann Dithering, can't you? And you can also better understand how light and shadow are distributed.
So overall a pretty nice algorithm, I would say! Although I like the dollar-bill look of the conventional one as well.

So that’s it for now! Until soon…

Yours, David

## Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.

And well… You might've noticed that I used some of my methods/classes a bit differently than in the last few posts. Having some distance from coding this stuff, I noticed I have to change a few things about it. I don't know yet whether I'll make a blog post about that. Do you want to read one?

Please follow and like us:

## Image Dithering

I showed you a lot of elementary stuff regarding image processing lately, so now it's time to do something nice. What I will show you is called image dithering. But first let me give you a motivation for it.

Let’s say you have a grayscale picture and you want to paint it with really only black and white. To motivate the problem, I will first demonstrate what happens if you just “round” the pixel values to black and white. For this purpose I will use a lot of functions I implemented in the previous posts. At first we have to load the picture.

library(Raspository)
imgColor <- imageRGBFromJpeg("Wagner.jpg")
plot(imgColor)


The next step is to convert the picture to grayscale. Remember the function I implemented in this post?

img <- imageBWFromRGB(imgColor, c(0.3, 0.59, 0.11))
plot(img)


Our eyes don’t treat all the colors equally, which is why I chose these proportions for the colors in the grayscale conversion. I took the values from this post on tutorialspoint.

Now let’s just quick and dirty convert this to completely black and white.

imgTmp <- img
imgTmp@current <- round(imgTmp@current)
plot(imgTmp)


Wow! We certainly lost basically all the information in the picture.

## Implementing Image Dithering

Before I show you how to implement a version of dithering, I will shortly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? Or how exactly would you define gray?
Quite simply as a mixture of black and white. Now think about colors and the screen you’re probably sitting in front of. You have basically only three colors (RGB!). And each of the pixels on your screen consists of three sub-pixels, one of each of those three colors.
You perceive them not as individual dots, because they’re too close together for your eyes to distinguish them. Now the question is whether we could do something similar in this black and white case. And of course: Yes, we can! It is called image dithering, which is by the way also applicable to colored images.

The idea is that you iterate over all of your pixels and apply to each of them some kind of rounding function. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed.
Of course there are different methods for doing so. I’ll show you two of them today.
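The core idea can be sketched in one dimension before worrying about 2-D neighborhoods. This is a hypothetical toy example of mine, not code from the Raspository:

```r
# Toy 1-D error diffusion: round each value to 0 or 1 and push the
# rounding error onto the next, not-yet-processed element.
ditherRow <- function(row){
    for(i in seq_along(row)){
        old <- row[i]
        new <- round(old)
        row[i] <- new
        if(i < length(row)){
            row[i + 1] <- row[i + 1] + (old - new)
        }
    }
    row
}

ditherRow(rep(0.5, 6))  # 0 1 0 1 0 1
```

Every element ends up black or white, but the mean gray level of the row is preserved, which is exactly what makes dithered images readable from a distance.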

### Floyd Steinberg dithering

Let’s begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }

            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}


For the future, some kind of kernel function would be nice, so that different error-diffusion kernels can be applied to pictures. But now let’s test it.

imgFS <- floydSteinbergDithering(img)
plot(imgFS)


That’s awesome! It almost looks like we had different gray values in our picture. And there are just some minor artifacts introduced by it, meaning some apparent structures that aren’t actually present in the original.

Now let’s try another method, which has a larger kernel.

### Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. And some of you might remember a time when it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let’s come to the actual method. It is called minimized average error. And again the Wikipedia article on it is pretty good.

This time the neighborhood of a pixel is extended by a range of one. Let me show you the implementation…

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x,y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x,y] <- newPixel

            if(x < n){
                pixel[x + 1, y    ] <- pixel[x + 1, y    ] + error * 7/48
            }

            if(x < n - 1){
                pixel[x + 2, y    ] <- pixel[x + 2, y    ] + error * 5/48
            }

            if(x > 2 && y < m){
                pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
            }

            if(y < m){
                pixel[x    , y + 1] <- pixel[x    , y + 1] + error * 7/48
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
            }

            if(x < n - 1 && y < m){
                pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
            }

            if(x > 2 && y < m - 1){
                pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
            }

            if(x > 1 && y < m - 1){
                pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
            }

            if(y < m - 1){
                pixel[x    , y + 2] <- pixel[x    , y + 2] + error * 5/48
            }

            if(x < n && y < m - 1){
                pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
            }

            if(x < n - 1 && y < m - 1){
                pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}


You wanna see its effect, don’t you? Here you go…

imgMea <- minimizedAverageErrorDithering(img)
plot(imgMea)


Do you see the difference? I think we got rid of the artifacts! Isn’t that amazing? I really love how demonstrative image processing is.
But that’s it for today… See you soon!
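As a side note: the two methods above differ only in their diffusion weights, so the kernel-function idea mentioned earlier could be sketched roughly like this. This is a hypothetical generalization of mine — `ditherWithKernel` and the offset table are my own names, not part of the Raspository:

```r
# Hypothetical generalization: error diffusion driven by a table of
# (dx, dy, weight) offsets instead of hard-coded if-blocks.
ditherWithKernel <- function(pixel, kernel, transformPaletteFunction = round){
    n <- dim(pixel)[1]
    m <- dim(pixel)[2]
    for(y in seq(m)){
        for(x in seq(n)){
            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel
            for(k in seq_len(nrow(kernel))){
                xx <- x + kernel$dx[k]
                yy <- y + kernel$dy[k]
                # drop error that would fall outside the image
                if(xx >= 1 && xx <= n && yy >= 1 && yy <= m){
                    pixel[xx, yy] <- pixel[xx, yy] + error * kernel$w[k]
                }
            }
        }
    }
    pixel
}

# The Floyd-Steinberg weights expressed as such a kernel:
floydSteinbergKernel <- data.frame(
    dx = c( 1, -1,  0,  1),
    dy = c( 0,  1,  1,  1),
    w  = c( 7,  3,  5,  1) / 16)
```

Swapping in the twelve Jarvis-Judice-Ninke offsets with their /48 weights would then reproduce the minimized average error method without duplicating the loop.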

## Availability Of The Code

You can access a maintained version of the code from this post in my GitHub repository Raspository under R/imageConversion.R.

Please follow and like us:

## Converting HSV To RGB

This post is the sequel to this week’s post about color spaces. I think it will be a rather short post, where I just implement the conversion back from HSV to RGB and then test my methods for demonstration.
In doing so I used the corresponding Wikipedia article as a rough guideline.

So let me show you how I did it.

library(Raspository)
calculateRGBvalue <- function(H, C, X, m){
    if(H >= 0 && H <= 1){
        return(c(m + C, m + X, m))
    }else if(H >= 0 && H <= 2){
        return(c(m + X, m + C, m))
    }else if(H >= 0 && H <= 3){
        return(c(m, m + C, m + X))
    }else if(H >= 0 && H <= 4){
        return(c(m, m + X, m + C))
    }else if(H >= 0 && H <= 5){
        return(c(m + X, m, m + C))
    }else if(H >= 0 && H <= 6){
        return(c(m + C, m, m + X))
    }else{
        return(c(0, 0, 0))
    }
}
require(abind)

## Loading required package: abind

hsvArrayToRgb <- function(hsvArray){

    # Calculate the chroma
    C <- hsvArray[,,3] * hsvArray[,,2]

    H <- hsvArray[,,1] / 60

    X <- C * (1 - abs(H %% 2 - 1))

    m <- hsvArray[,,3] - C
    rgb <- mapply(FUN = calculateRGBvalue, H = H, C = C, X = X, m = m)

    rgbArray <- abind(matrix(rgb[1,], nrow = nrow(hsvArray)),
                      matrix(rgb[2,], nrow = nrow(hsvArray)),
                      matrix(rgb[3,], nrow = nrow(hsvArray)),
                      along = 3)

    return(rgbArray)
}

imageRGBFromHSV <- function(img){
    return(new("imageRGB", original = hsvArrayToRgb(img@original),
               current = hsvArrayToRgb(img@current),
               operations = img@operations))
}


So far so good… That’s basically just a one-to-one implementation of the method from the Wikipedia article, with the one difference being that I add m beforehand and not later.

So let’s come to the fun part!

image <- imageRGBFromJpeg("Mountain.jpg")

## Loading required package: jpeg

plot(image)


What a ginormous mountain! But it isn’t even the tallest mountain in Saarland, believe it! Now we can also plot this as a black and white picture.

plot(imageBWFromRGB(image))


Then let’s convert to HSV and plot the S (saturation) and V (value/brightness) channel as black and white picture.

hsv <- imageHSVFromRGB(image)
plot(as.raster(hsv@current[,,2]))


Ha, that looks like something from the 90s!

plot(as.raster(hsv@current[,,3]))


OK, and now compare this to the conversion to black and white from RGB. It’s a worse picture in my opinion, but in comparison to the other one it holds accurate information about the lighting condition of each pixel. Do you see how this could be useful in Computer Vision, for example?
Let’s just say you’re searching for something in your pictures that absorbs a lot of light. Then this representation could be of use.

And last, but not least we have to convert the image back to confirm that we actually get back the original picture.

plot(imageRGBFromHSV(hsv))


Looks the same! I guess that concludes this blog post. And I already have some plans for the next few posts.
So, hopefully see you! 🙂

Please follow and like us:

## Introduction To Color Space

So far we have only looked at black and white pictures, but of course I know y’all are craving for me to talk about color pictures! And that’s what I will do today… after having visited a lecture about it.
What exactly I will do today is implement an S4 class for colored pictures with an RGB color space, similar to the one for black and white pictures, and then implement one with an HSV color space and add corresponding transformation methods between the two classes.
So pretty basic stuff as well, but especially the HSV version could be pretty useful for manipulating pictures or further using them in Computer Vision procedures.

## RGB Color Space

So far we have only looked at pixels with a one-dimensional value between zero and one.
In a more mathematical way, we looked at a function:

$$\mathbb{N} \times \mathbb{N} \rightarrow [0,1]$$

If this notation confuses you… Just ignore it! But in other words: the picture is a function of two natural-numbered coordinates that gives you back a real-numbered value (an intensity) between zero and one.

If we want to do the same with colored images, it is useful to define each pixel as having three dimensions: one for Red, one for Green and one for Blue. Therefore the name RGB.
This way we would have a function:

$$\mathbb{N} \times \mathbb{N} \rightarrow [0,1] \times [0,1] \times [0,1]$$

Again in other words: the picture is a function of two natural-numbered coordinates that gives you back a tuple of three real-numbered values between zero and one.
By the way, this RGB color space is probably what you’re looking at right now, as computer monitors usually use it, though of course things in reality are more complicated. There are more than three wavelengths of light, but as our color vision system has only three different color receptors, an RGB color space is good enough for the human eye. But of course in scientific systems it could very well be that you want to have more than three colors in a picture. Coming originally from Molecular Biology, I know that there are many distinct wavelengths that have some biological importance, e.g. where certain molecules or functional groups of molecules absorb light.
So you could imagine using color spaces of higher dimension for some scientific applications.

But let’s return to the topic of RGB color spaces. In programming terms, one way of implementing such a color space is with a three-dimensional array, which is pretty straightforward.
So without further ado let me show you my implementation. It’s pretty simple, as the jpeg package already returns the right kind of array when reading a JPEG.

imageRGB <- setClass("imageRGB", slots=list(original="array", current="array", operations="list"))

imageRGBFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    return(new("imageRGB", original = image, current = image, operations = list()))
}

plot.imageRGB <- function(object){
    plot(as.raster(object@current))
}


## Converting To Black And White

Now I also want a method to convert an RGB image into a black and white one.

library(Raspository)
imageBWFromRGB <- function(img, chPortion = c(0.33, 0.33, 0.33)){
    if(sum(chPortion) > 1){
        stop("Channel portions mustn't add up to more than one.")
    }
    original <- img@original[,,1] * chPortion[1] + img@original[,,2] * chPortion[2] + img@original[,,3] * chPortion[3]
    current <- img@current[,,1] * chPortion[1] + img@current[,,2] * chPortion[2] + img@current[,,3] * chPortion[3]
    return(new("imageBW", original = original, current = current, operations = img@operations))
}


The parameter chPortion is a vector that indicates how much each channel from the original color space should contribute to the resulting black and white picture. As a matter of course, its components aren’t allowed to add up to more than one.
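To make the weighting concrete, here is a tiny hypothetical check on a single pure-red pixel, using the luminosity-style weights that appear elsewhere in these posts:

```r
# A 1x1 "image" containing one pure-red pixel (R = 1, G = 0, B = 0)
px <- array(c(1, 0, 0), dim = c(1, 1, 3))
weights <- c(0.3, 0.59, 0.11)  # red, green, blue portions; sum <= 1

gray <- px[,,1] * weights[1] + px[,,2] * weights[2] + px[,,3] * weights[3]
gray  # 0.3 -- pure red contributes only 30% of full intensity
```

With equal weights of 0.33 the same pixel would come out slightly brighter; the unequal weights mimic how our eyes are most sensitive to green and least to blue.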

## HSV Color Space

Another color space I want to show you today is HSV. While you can imagine RGB as a cube, in the same analogy HSV would be a cylinder, with one polar coordinate H, which stands for hue. This coordinate determines the color of a given pixel.
The other two coordinates are S, the saturation, and V, the value (brightness) of a pixel. Those two coordinates are rational numbers between zero and one as before.

Now you could ask why this representation is useful, and the answers are many. But let me just name the editing of pictures as one of them. Let’s say you have a picture that is too dark, but the colors themselves are OK. Then you could use the HSV color space to change the value/brightness, while leaving hue and saturation the same.
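As a quick sketch of that editing idea — a hypothetical helper of mine, assuming an HSV array laid out as in this post, with the value channel in `[,,3]`:

```r
# Brighten an HSV image by scaling its value channel, clamping at 1;
# hue ([,,1]) and saturation ([,,2]) stay untouched.
brightenHSV <- function(hsvArray, factor = 1.3){
    hsvArray[,,3] <- pmin(hsvArray[,,3] * factor, 1)
    hsvArray
}

darkPixel <- array(c(200, 0.5, 0.4), dim = c(1, 1, 3))  # H, S, V
brightenHSV(darkPixel)[,,3]  # 0.52 -- same color, more light
```

Doing the same directly in RGB would require touching all three channels at once and would easily shift the hue.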
But of course there are many other color spaces with other neat properties you might want to exploit.
However, instead of dreaming about the perfect color space, which feels like magic, let’s just implement this one, including the conversion from RGB.
We’ll also need some helper functions for it. And please, if you don’t understand something, just ask me in the comments. There’s no shame in not understanding something, and I will be happy to give you an explanation to help you understand it.

imageHSV <- setClass("imageHSV", slots=list(original="array", current="array", operations="list"))

calculateHUE <- function(max, maxIndex, min, r, g, b){
    h <- 0.0
    if(max == min){
        return(h)
    }else if(maxIndex == 1){
        h <- 60.0 * ((g - b)/(max - min))
    }else if(maxIndex == 2){
        h <- 60.0 * (2.0 + (b - r)/(max - min))
    }else if(maxIndex == 3){
        h <- 60.0 * (4.0 + (r - g)/(max - min))
    }

    # if the value is negative add 360° to it
    if(h >= 0){
        return(h)
    }else{
        return(h + 360)
    }
}

rgbArrayToHsv <- function(rgbArray){
    # get the maximal color and its index in each pixel
    max <- apply(rgbArray, c(1,2), max)
    maxIndex <- apply(rgbArray, c(1,2), which.max)
    # get the minimal color in each pixel
    min <- apply(rgbArray, c(1,2), min)

    # calculate the hue for each pixel
    h <- mapply(FUN = calculateHUE, max = max, maxIndex = maxIndex, min = min,
                r = rgbArray[,,1], g = rgbArray[,,2], b = rgbArray[,,3])
    # convert vector back to matrix
    h <- matrix(h, ncol = ncol(max))

    # calculate saturation
    s <- (max - min)/max
    # set values to zero, where max is 0 (division by zero -> NA)
    s[is.na(s)] <- 0
    # max is equal to v (value/brightness)
    v <- max

    # bind matrices together to array and return
    require(abind)
    hsvArray <- abind(h, s, v, along = 3)
    return(hsvArray)
}

imageHSVFromRGB <- function(img){
    return(new("imageHSV", original = rgbArrayToHsv(img@original),
               current = rgbArrayToHsv(img@current),
               operations = img@operations))
}


Phew, this took me more effort than expected. Luckily I feel comfortable using the various apply functions in R. They might seem alien at first, but there’s a lot of neat stuff you can do, if you can use them.
But one side note… The calculateHUE function is very time-consuming, because it is called a lot. So I think it would be worthwhile to re-implement it at some point in C++ to speed things up. Probably a good exercise for the future. I’ll keep you posted. 🙂

Anyway that’s enough for me today… David tired now. 😮 The conversion from HSV to RGB will probably come the next few days.
So, see ya!

## Availability Of The Code

And of course you can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageRGB.R and R/imageHSV.R.

Please follow and like us:

# And Writing Documentations In R

I wrote this function months ago, while writing the report for my seminar, and I’ve wanted to make a post about this Bookmark-Coloring Algorithm ever since, but I never found the right chance. As I wrote some documentation for the Raspository yesterday evening, I thought it would be nice to make a post about this specific topic. Therefore I thought that using the Bookmark-Coloring Algorithm as an example would be nice.

## The Bookmark-Coloring Algorithm

This specific algorithm’s purpose is to generate a personalized PageRank. In more detail, given a starting website, this algorithm calculates the chance to end up at other websites. But this idea is applicable to other fields as well. In my post about network-based integration through heat diffusion I showed you a similar method applied to a network of multi-omic data. On the same data-set you could use the Bookmark-Coloring Algorithm.
The basic idea behind the Bookmark-Coloring Algorithm is that you have some color that diffuses through a network, which is in my opinion equivalent to heat diffusing in a network. Correct me, if I’m wrong.

I implemented the algorithm following the paper about it by Pavel Berkhin. More precisely I implemented the Algorithm 2 from the paper. So let me show my implementation:

require(igraph)
require(dequer)

BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){
    # initialise vector of transition chances
    p <- c()
    p[V(graph)] <- 0

    q <- queue()
    pushback(q, v)

    # initialise vector that indicates how much color is in one node
    colorInVertex <- c()
    colorInVertex[v] <- 1

    # execute as long as queue q has elements
    while(length(q) > 0){
        i <- pop(q)
        w <- colorInVertex[i]
        # use up the color in node
        colorInVertex[i] <- NA

        p[i] <- p[i] + retentionCoefficient * w

        # if all color is used up continue to next element in queue
        if(w < tol){
            next
        }

        # execute for all neighbors
        for(j in neighbors(graph, i, mode = "out")){
            if(!is.na(colorInVertex[j])){
                # add color to neighbor
                colorInVertex[j] <- colorInVertex[j] +
                    ((1 - retentionCoefficient) * w/degree(graph, i, mode = "out"))
            }else{
                # initialise color in neighbor
                pushback(q, j)
                colorInVertex[j] <- (1 - retentionCoefficient) * w/degree(graph, i, mode = "out")
            }
        }
    }

    return(p)
}


I wrote some comments that hopefully help you understand what’s going on. That’s also the first part of the documentation. It’s the step you’ll probably do while writing your code, and in my opinion it’s always useful.

So are we done with the documentation? No. Not if we want to put this function into a package.

## roxygen2 documentation

roxygen2 is a nice package allowing you to write in-line documentation in R. I won’t go too much into detail about it here, as there are lots of online sources for it, but I will show you with a short example how to do it!

Now let me show you how to write documentation for the BCA function; I will go over all the specified tags.

#' Bookmark Coloring Algorithm
#'
#' @aliases BookmarkColoringAlgorithm
#'
#' @description This function calculates a teleportation vector from a given
#' starting node to other nodes in a given network.
#'
#' @export BCA
#' @import dequer
#' @import igraph
#'
#' @param graph an object of type \code{\link[igraph]{igraph}}.
#' @param v a starting vertex from the above graph. Can be either its identifier
#' or an igraph.vs object.
#' @param retentionCoefficient the restart probability for each node.
#' @param tol a tolerance threshold, indicating what the smallest value of color
#' is that should propagate further.
#'
#' @return a preference/teleportation vector
#'
#' @references \insertRef{Berkhin2006}{Raspository}
#'
#' @examples
#' library(igraph)
#' g <- make_ring(5)
#' preferenceVector <- BCA(g, 1)
BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){


OK, let’s go over some of the tags…
In the first line, of course, you have the title of the function. In addition to the description, you can also add a details tag: while the description should give a short overview of the method, you could include theoretical background in the details.
Then the needed packages are imported. roxygen2 will convert them into lines in your NAMESPACE file.

With params and return you should shortly describe the parameters’ and return value’s types and what they should contain.

For the following use of the references tag, you also need to import the Rdpack package and include a “REFERENCES.bib” file in the inst folder with the corresponding BibTeX entries. In my opinion you should always use this when implementing some method from a source… be it a book or a paper. Rdpack imports those references automatically from your BibTeX file into the documentation files.

Last, but not least I included a runnable example. This is important to give your user a starting point on how to use your function, but furthermore it is a test for the function. Each time your package is built, example code is run. So you will be notified, if there are any errors.
But we will go more into automated testing another time. Because there is of course more you can do. But you should always write example code, if your function is visible to the user.

After writing this documentation in your file, you have to compile it by writing:

roxygen2::roxygenise()


I hope that this short example will help you writing and documenting your own functions! 🙂

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/BCA.R.

Please follow and like us:

# And Measuring Noise

Using the nomenclature developed in yesterday’s post I will today also implement a method for creating salt and pepper noise in images. This noise simulates dead pixels by setting them either to the lowest or highest grey value, in our case 0 or 1.

First let’s install and load the Raspository, where the methods and class developed yesterday are located in.

if (!requireNamespace("devtools", quietly = TRUE))
    install.packages("devtools")
devtools::install_github("David-J-R/Raspository")

library(Raspository)


Without further ado let me show you how to implement the salt and pepper noise.

saltAndPepperNoise <- function(object, percentage = .2){
    # select the indices to set to 0 or 1 at random
    indices <- sample(length(object@current), length(object@current) * percentage)
    # draw zeros and ones from a binomial distribution
    values <- rbinom(length(indices), 1, 0.5)

    object@current[indices] <- values
    return(object)
}


OK, our animal test subject today will be an owl.

set.seed(1)
owl <- imageBWFromJpeg("owl.jpg")
plot(owl)


It looks surprised, doesn’t it? It probably knows about the salt and pepper noise we will add right away.

owlNoise <- saltAndPepperNoise(owl)
plot(owlNoise)


Uhm, this looks really annoying now. The other kinds of noise weren’t that disruptive. Just like dead pixels on your screen, they make you feel bad.

## Introducing Two Measures of Noise

But what would be nice now as well, would be to have some measure of the noise. Let’s just use the mean squared error for it. I’ll implement a function that can compare two pictures or the current and original picture in an imageBW object depending on the input parameters.

MSE <- function(object, other = NULL){
    if(is.null(other)){
        errorMatrix <- object@original - object@current
    }else{
        errorMatrix <- other@current - object@current
    }
    squaredErrorMatrix <- errorMatrix ^ 2
    return(mean(squaredErrorMatrix))
}


Nice! Now we finally have a measure for our noise. Of course this will be especially helpful later, when we want to evaluate the performance of reconstruction methods.
So let’s test it.

MSE(owlNoise)

## [1] 0.05545031

MSE(owlNoise, owl)

## [1] 0.05545031


As expected, both function calls leave us with the same result.

Let’s also implement the peak-signal-to-noise ratio, which gives us back a decibel value. The lower the value the more noise you have in the picture. You can calculate it according to the following formula, which I will also “simplify” a bit.

$$PSNR(x, y) = 10 \cdot \log_{10} \left( \frac{MAX^{2}}{MSE(x, y)} \right)$$

I hope you know the logarithm rules. 😛

$$PSNR(x, y) = 10 \cdot \log_{10}(MAX^{2}) - 10 \cdot \log_{10}(MSE(x, y))$$ $$PSNR(x, y) = 20 \cdot \log_{10}(MAX) - 10 \cdot \log_{10}(MSE(x, y))$$

Where x and y are the two pictures to compare and MAX is the maximum value a pixel can take on.

And for MAX being one in our case we get:

$$PSNR(x, y) = -10 \cdot \log_{10}(MSE(x, y))$$
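Just to convince ourselves that the simplification is sound, here is a quick numeric check of the three forms with a hypothetical MSE value:

```r
# Hypothetical check: all three PSNR formulations must agree for MAX = 1
mse <- 0.055
MAX <- 1

full       <- 10 * log10(MAX^2 / mse)
split      <- 10 * log10(MAX^2) - 10 * log10(mse)
simplified <- -10 * log10(mse)

c(full, split, simplified)  # three times the same decibel value
```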

Subsequently, it’s now time to implement this measure as well:

PSNR <- function(object, other = NULL){
    mse <- MSE(object, other)
    return(-10 * log(mse, base = 10))
}


That was pretty straightforward. So let’s test it!

PSNR(owlNoise)

## [1] 12.56096


It’s a relatively low value, which indicates a lot of noise. In my opinion this value is more intuitive, probably because of the use of the logarithm function. Indeed, the MSE value alone didn’t seem that big.

But that’s it for now. There will be probably further stuff next week. But probably not that much.
Thanks for reading and have nice remaining weekend. 🙂

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageNoise.R.
But as I will expand the code, it will also probably grow there.

Please follow and like us:

# Let’s Add Some More Noise

As mentioned in yesterday’s post, there are more kinds of noise. So let’s first load today’s picture and then add some multiplicative noise!

library(jpeg)
pigeon<- readJPEG("pigeon.jpg")
pigeonBW <- pigeon[,,1]
plot(as.raster(pigeonBW))


Oh look, it’s a pigeon on a lantern. I wonder what this little guy was thinking? But let’s not dwell on this. Rather, let’s think about how we can scramble up this picture!
I will first try multiplicative normally distributed noise, which is pretty straightforward if you followed yesterday’s post.

set.seed(1)
pigeonNormalNoise <- pigeonBW * (1 + rnorm(length(pigeonBW), sd = sd(pigeonBW)))
pigeonNormalNoise[pigeonNormalNoise > 1] <- 1
pigeonNormalNoise[pigeonNormalNoise < 0] <- 0
plot(as.raster(pigeonNormalNoise))


Can you see the difference to the additive noise from yesterday? Indeed, the darker areas of the picture seem nearly untouched by the noise. That’s because the intensity of the noise is now dependent on the intensity of the pixel. And intensity is what dark pixels are lacking… so to speak.

But it’s really annoying and redundant to have to write the same stuff over and over again, isn’t it? So let’s do what a computer scientist is best at: let’s generalize some methods.
But where to start?

In my view it would be useful to have some kind of (black and white) image class, which saves the image and the operations on it. Hence the process would also be reproducible, which is always nice.

imageBW <- setClass("imageBW", slots=list(original="matrix", current="matrix", operations="list"))

imageBWFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    imageBW <- image[,,1]
    return(new("imageBW", original = imageBW, current = imageBW, operations = list()))
}

plot.imageBW <- function(object){
    plot(as.raster(object@current))
}


As an illustration I also overloaded the default plot function. This makes our lives even easier. In the next step I’ll implement the different noise functions. There are probably more generalized ways of doing so, but for now this will suffice. I also haven’t come up with a good way of storing the applied operations yet. So I’ll probably do that another day as well.

cropPixels <- function(object){
    object@current[object@current > 1] <- 1
    object@current[object@current < 0] <- 0
    return(object)
}

addUnifNoise <- function(object){
    slot(object, "current") <- slot(object, "current") +
        runif(length(slot(object, "current")), min = -1, max = 1)
    return(cropPixels(object))
}

addNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current + rnorm(length(object@current), sd = sd(object@current))
    }else{
        object@current <- object@current + rnorm(length(object@current), sd = sd)
    }
    return(cropPixels(object))
}

multiplyUnifNoise <- function(object){
    object@current <- object@current * (1 + runif(length(object@current), min = -1, max = 1))
    return(cropPixels(object))
}

multiplyNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current * (1 + rnorm(length(object@current),
                                                      sd = sd(object@current)))
    }else{
        object@current <- object@current * (1 + rnorm(length(object@current),
                                                      sd = sd))
    }
    return(cropPixels(object))
}


For the future I would also like to have this working with infix operators, meaning that I could do stuff like image <- image + normalNoise(...) or image <- image * (1 + normalNoise(...)), so that I have a neat little grammar for working with images. However, for the moment those functions will do the job. Now let us make use of the newly implemented methods and add a lot of noise to the picture.

image <- imageBWFromJpeg("pigeon.jpg")
image <- addNormalNoise(image)
image <- multiplyUnifNoise(image)
image <- addUnifNoise(image)
plot(image)


Haha, there’s not much left of this poor pigeon. But you can still see its and the lamp’s outlines. And I’m quite certain that there are algorithms out there that could reconstruct a lot of the original image. You will see that later on. 🙂
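The infix-operator idea mentioned above could be sketched roughly like this. This is purely hypothetical — `normalNoise` is an assumed helper name, and the real Raspository class may look different:

```r
library(methods)

# Minimal stand-in for the imageBW class from this post
imageBW <- setClass("imageBW", slots = list(original = "matrix",
                                            current = "matrix",
                                            operations = "list"))

# A noise "term": just a vector of normal deviates
normalNoise <- function(n, sd = 0.1) rnorm(n, sd = sd)

# '+' on an image and a numeric adds the values to the current
# pixels and clamps the result back into [0, 1]
setMethod("+", signature("imageBW", "numeric"), function(e1, e2){
    e1@current <- pmin(pmax(e1@current + e2, 0), 1)
    e1
})

img <- new("imageBW", original = matrix(0.5, 2, 2),
           current = matrix(0.5, 2, 2), operations = list())
img <- img + normalNoise(length(img@current))
range(img@current)  # guaranteed to stay within [0, 1]
```

With such a method the cropping would happen implicitly on every operation, and the operations list could be appended inside the method for reproducibility.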

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageOneChannel.R.
But as I will expand the code, it will also probably grow there.

Please follow and like us: