My Humble Beginnings As A Programmer



Often when I tell people that I didn't really start programming until my time at university, they won't believe me. And while I performed well at programming tasks from the beginning of my life as a Bioinformatics student, it's mostly true. But I'm not here today to talk about that; I'm here to talk about the other half of the truth: the programming I did before I was a student, or in other words, my humble beginnings as a programmer.

The Schooldays

Well. As I’m a little bit older, there hasn’t been much programming during my schooldays. Actually, the only time we programmed sometime was in ninth grade in our math class. We had an old teacher, who was an early computer enthusiast. And so when we were done with the subject matter for the year, we did some programming in Turbo Pascal. If you don’t know, Turbo Pascal is an out of fashion programming language, that as far as I know was developed for teaching purposes.

Although we didn’t do great things in it, we at least learnt some concepts. As far as I remember we just had to implement a program, that printed out all the primes from one to one hundred. Because I was done with it pretty quickly, I brought some programming books from my dad to class. Those books contained code for computer games written in Turbo Pascal. Yep… That was actually a thing in the 80’s to 90’s! But I didn’t get far. Do you know how bothersome it is to copy paste pages of code the old school way?
So yep… That wasn’t really my beginnings as a programmer. I mean, I learnt something, but it didn’t really get me motivated to continue learning on my own. And a little bit of time passed till my next attempt.

Beginnings As A Programmer On My Own

When I was about eighteen, the story continued. I was just about to finish middle school, had my first girlfriend and so on. Besides that, I played the MMORPG World of Warcraft. While I started out on the official servers, at some point I switched to a private shard called Lordearon. For the Warcraft fans amongst you: NO, that was not a typo on my part. That was actually the name of the server. Playing there, it didn't take long for me to become a gamemaster (GM). For your information, Lordearon was run as a project by some students. They didn't have much time or resources and therefore needed help managing the server.

During this time I learnt a little bit of PHP, which was used in conjunction with SQL to run the server. And to fix bugs it was useful to be able to write small scripts. I didn't really do this for long, but still I learnt some stuff, mostly on my own. And the fact that some computer science students trusted me enough to give me GM and even some admin rights gave me self-confidence.

Self-developed picture of me at nineteen.

Just Playing Around

After some time had passed I got the idea that I had to learn C. Well, it was shortly after I broke up with my then-girlfriend, so I had a lot of newly gained free time, and this wasn't the only project I started during this time. I found a nice online tutorial for learning C back then, which I followed for a bit. And then I programmed some simple programs for doing calculations for the laboratory… Nothing special, just basic arithmetic.

Unfortunately I didn’t follow those beginnings as a programmer furthermore. But even if I did only the basics and no algorithms and stuff, I think I laid down a valuable foundation. Then came my second-chance education, where I did my high school degree. During this time I was used to capacity with school. However I improved my math capabilities a lot along the way, which is important for programming.

Before My Student Life

After I finished my high school degree (Abitur) I had some months of free time until my time at the university began. And luckily I chose to spend some of it on online courses. While some of them were about statistics and maths in general, others were about programming basics. And I noticed that I should've started with something like that much earlier, because I liked it. Again, I didn't learn any hard concepts, but those basics helped me to focus on the hard problems later at the university.

So this is also what I would advise you, if you're planning on studying a computer science related program. You don't need to know the hard concepts, but most lectures won't give you the time to pick up the basics of programming, like loops and conditional statements. Additionally, it's an advantage to know how to read and write data; that's something I didn't learn before uni. While these are rather easy concepts compared to other ones you learn as a computer scientist, you could struggle a bit at first if you have never done such a thing.

Nowadays there’s really this big advantage, that there are so many for free resources out there, that can help you to learn programming on your own. So there’s really no reason not to do so. And even if you just play around a bit, like I did, it will help you to face future challenges. Just always keep in mind, that it’s important to be comfortable with the basics in programming to solve harder problems. Everybody has to start small and if you do it for yourself, you don’t need to be perfect. Also nobody is perfect! 🙂

I hope you enjoyed this little story about my beginnings as a programmer. If you need some advice on the topic of starting to learn programming, just write me, so that I can make posts about topics that might help you.
Until then, have a good one!

The Fourier Spectrum Of An Image



Decomposing An Image Into Frequencies

The Fourier transform is a useful tool to disassemble a signal into its component frequencies. Amongst others, it has applications in signal processing, quantum mechanics and even in Bioinformatics for the fast calculation of multiple alignments (10.1093/nar/gkf436).
So it's no wonder that it is also used in image processing. It is often used for the design of filters and for image convolutions in general, as it makes it possible to calculate those pretty fast. It can furthermore be used to eliminate regular noise in images. But those are both topics for another time.


Today I will show you how to calculate the Fourier spectrum of an image, which is in any case useful for visualizing what the Fourier transformation actually does. You can also use it to investigate repeating features in your image. And from there it's just one step further to using it for denoising an image.

How To Calculate It

I won’t got much into the theoretical background here right now, but I will give you the means to calculate it. If you want to get more intuition how the formulas for the Fourier transform are derived, I recommend you watching the following video of 3Brown1Blue.
To understand this part you should also at least know a little bit about the concept of imaginary numbers. But maybe I’ll do a quick tutorial in them at some point later. Just let me know, if you’d like to see that.

So what we actually need is the two-dimensional discrete Fourier transformation, because pictures of course are two-dimensional. Besides discrete Fourier transformations there are also continuous ones, which can be applied to continuous functions.
So let me just give you the equation:

\(\hat{f}(p,q) = \frac{1}{\sqrt{N \cdot M}} \cdot \sum_{n = 0}^{N - 1}\sum_{m = 0}^{M - 1}f(n,m) \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot p \cdot n}{N}} \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot q \cdot m}{M}}\)

In this case N is the number of rows in the image and M the number of columns. The image itself is represented as the two-dimensional function \(f(n, m)\), and \(\hat{f}(p,q)\) is the Fourier-transformed image, where p and q are the new variables indicating rows and columns. And \(i\) is the imaginary unit. It's a solution to the equation \(x^2 = -1\), which cannot be solved by any real number.

Now the value of each pixel is a complex number. Do you have any idea how we could visualize this in a sensible manner? Well… Complex numbers are basically just coordinates in a two-dimensional space. So we could just take their absolute value, i.e. their distance to the origin of their coordinate system. Applying this to every pixel gives us the so-called Fourier spectrum of an image.
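
In other words, for a transformed pixel value \(z = a + b \cdot i\) we take the Euclidean norm

\(|z| = \sqrt{a^2 + b^2}\)

which is exactly what the implementation further below computes from the real and imaginary parts.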

In the next parts I will show you step by step how to implement all of this and make some improvements on it. Luckily for us we don't need to implement the Fourier transform ourselves. In R it is already available in the form of the fft function, and many other programming languages provide you with high-performance implementations of it. Even though implementing the Fourier transform itself isn't that big of a deal, performance could be. But maybe I'll do some home-grown implementation of it at a later point in time.

Some Helper Functions

First we’ll need some helper functions. I was thinking about giving some of them their own blog post. But then again they’re not that interesting, so they would warrant that. We will also use some helper functions developed in my last post.

To make our life easier, min and max functions for images would be nice, returning the minimal and maximal grey values of an image. The implementation is straightforward:

min.imageOneChannel <- function(object, na.rm = FALSE){
    return(min(object@imageMatrix, na.rm = na.rm))
}
max.imageOneChannel <- function(object, na.rm = FALSE){
    return(max(object@imageMatrix, na.rm = na.rm))
}

Next I’ll introduce the first of many point operations. Point operations change the value of one pixel independent of neighboring pixels. As the Fourier spectrum of an image might have values below zero or above one at certain pixels, we will need to rescale it so that it is inside the boundaries again. We can to this by with the following function:

affineRescale <- function(img){
    a <- max(img) - min(img)
    b <- min(img) / a

    return(affineGreyscaleTransformation(img = img, 1.0/a, -b))
}

After applying this function the minimal value will be zero, the maximal value will be one, and all other values will be scaled in between.
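
In case you don't have my last post at hand: affineGreyscaleTransformation is used here to apply a map of the form a * x + b to every grey value. A minimal sketch of how such a function could look (the exact signature is my assumption here, so check the repository for the maintained version):

affineGreyscaleTransformation <- function(img, a, b){
    ## sketch: apply the affine map a * x + b to every grey value
    return(new("imageOneChannel", imageMatrix = a * img@imageMatrix + b))
}

Plugging in the values from affineRescale, every pixel x gets mapped to (x - min) / (max - min), which is exactly the rescaling described above.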

The Fourier Spectrum

So let’s implement the calculation of the Fourier spectrum of an image. You’ll see, that it is pretty straight-forward…

calculateFourierSpectrum <- function(img){

    imgFTrans <- fft(img@imageMatrix)

    ## calculate the euclidean norm of the complex coordinates
    fourierSpectrum <- new("imageOneChannel", 
                           imageMatrix = sqrt(Re(imgFTrans) ^ 2 + 
                                              Im(imgFTrans) ^ 2))

    return(fourierSpectrum)
}

Basically just two/three lines of code (depending on whether you count the return). And the nice thing about R's native fft function is that it keeps the dimensions of the matrix.

Now let’s load our today’s image

library(Raspository)
fence <- imageOneChannelFromJpeg("Fence.jpg")
Picture of a fence in Camphausen, Saarland.

It’s a fence, which has a lot of repeating structures. So it should give us some interesting result.

fftImg <- affineRescale(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft1.jpg")
Fourier spectrum of the image after affine rescaling

Huh?! What’s that? Nothing? Well, you can see a bit in the corners, but not much.

The Fourier Spectrum Scaled With The Logarithm

To make the spectrum better visible, we need to rescale it in a different manner. You remember that the equation for the Fourier transform contains the exponential function? So it would probably be sensible to take the logarithm to rescale the image. But remember… the logarithm is not defined for values smaller than or equal to zero. So it's a good idea to add one before scaling with the logarithm. The log1p function in R does exactly this for us.

logarithmicDynamicCompression <- function(img, c = NULL){
    if(is.null(c)){
        c <- 1 / log1p(max(img@imageMatrix))
    }

    return(new("imageOneChannel", imageMatrix = c * log1p(img@imageMatrix)))
}
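
A quick sanity check of the default scaling factor: with c = 1 / log1p(max), the largest value in the image maps to c * log1p(max) = 1, so the compressed image stays within our [0, 1] range.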

So let’s try it again.

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft2.jpg")
Fourier spectrum of the image after logarithmic rescaling

Now that’s the Fourier spectrum of the fence. But there’s something not so nice about it. The origin of the spectrum is not in the middle, but on the borders. This is a certain property of the discrete form of the Fourier transformation. In the continuous one the origin lies in the middle.

Shifting The Fourier Spectrum Of An Image

But lucky for us there's a quick fix for this minor problem. We just need to multiply the image with a second image of the same size first, containing ones and minus ones arranged in a checkerboard-like pattern. I won't show you here why this is so… But the Fourier transformation has many neat properties you can use, so I'll probably do a post about them in the future too.
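
For the curious, here is the property at work, stated without proof: a modulation in the spatial domain becomes a shift in the frequency domain,

\(f(n,m) \cdot e^{\frac{i \cdot 2 \cdot \pi \cdot p_0 \cdot n}{N}} \cdot e^{\frac{i \cdot 2 \cdot \pi \cdot q_0 \cdot m}{M}} \longleftrightarrow \hat{f}(p - p_0, q - q_0)\)

Choosing \(p_0 = N/2\) and \(q_0 = M/2\) turns the exponentials into \(e^{i \cdot \pi \cdot (n + m)} = (-1)^{n+m}\), which is exactly our checkerboard pattern. The spectrum gets shifted by half the image size in each direction, moving its origin to the center.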

For creating the checkerboard sign matrix I use some data.table magic. I'll probably also do a post or more about this awesome data structure in the future, as it makes the life of a data scientist much easier.

library(data.table)
calculateFourierSpectrum <- function(img){


    ## generate checkerboard sign imageMatrix to shift it in the fourier domain
    DT <- CJ(1:nrow(img@imageMatrix), 1:ncol(img@imageMatrix))
    DT[, sign := (-1)^(V1 + V2)]
    shiftMatrix <- new("imageOneChannel", imageMatrix = as.matrix(dcast(DT, V1 ~ V2, value.var = "sign")[,-1]))

    shiftedImg <- img * shiftMatrix

    imgFTrans <- fft(shiftedImg@imageMatrix)


    ## calculate the euclidean norm
    fourierSpectrum <- new("imageOneChannel", 
                           imageMatrix = sqrt(Re(imgFTrans)^2 + Im(imgFTrans)^2))

    return(fourierSpectrum)
}

Finally it’s time to also test that!

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft3.jpg")
Fourier spectrum of the image after logarithmic rescaling and shifting

And voila! Now I’m satisfied. 🙂
By the way, the Fourier transform flips the direction of patterns. So the big vertical stripe could be the horizon. But it could also be a boundary artifact… something I'll show you in more detail later, along with how to fix it.
All the stripes in the other directions represent frequencies across the picture. And what's nice… you can change features of the image here in Fourier space and then just as easily transform it back.

You’ll definitely hear more from Fourier transformations from me. I hope you enjoyed this post. I’d love to hear from you.

Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOneChannel, R/imagePointOperations and R/imageFeatures.


Implementation Of Arithmetic Image Operators



Or More Object Oriented Programming (OOP) in R

As I told you in one of my recent posts, I implemented a lot of functions related to image processing and built a lot onto my image processing framework in the R programming language. The changes include methods I learned in the lecture, e.g. convolutions, filters of different kinds and the calculation of statistical features. But I also did some code refactoring and included some new helper methods.
A set of those helper methods will be the topic of this blog post. I'm talking about arithmetic image operators. We will need those a lot in later exercises, so it's useful to have them implemented in an easy-to-use way.

So the basic idea is just to overload the different arithmetic operators to make them usable with images of the type imageOneChannel.
It should be possible to use the functions either with two images or with one image and one scalar.

The is. Function

As a little helper we will implement the is.imageOneChannel function. Functions with the prefix is. are used in R to test whether an object is of a given type. It doesn't save us much over using the inherits function from base R, but it makes the process at least somewhat more comfortable.
To be honest, what I've implemented is just a glorified wrapper around the inherits function, but that's generally how it is done. Of course that also leaves us with the possibility to write a little documentation using roxygen2. So here we go:

#' Is It A One Channel Image
#' 
#' @description Checks if an object is of type \code{\link[Raspository]{imageOneChannel}}.
#' 
#' @export
#'
#' @param x any R object.
#'
#' @return \code{TRUE}, if the object is of type \code{\link[Raspository]{imageOneChannel}}.
#'  \code{FALSE} otherwise.
#' 
#'
#' @examples
is.imageOneChannel <- function(x){
    return(inherits(x, "imageOneChannel"))
}
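
A quick sanity check (just an illustration of the expected behavior):

## a plain matrix is not an imageOneChannel...
is.imageOneChannel(matrix(0, 2, 2))
## [1] FALSE
## ...but a freshly constructed image is
is.imageOneChannel(new("imageOneChannel", imageMatrix = matrix(0, 2, 2)))
## [1] TRUE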

So, that was easy. And it’s not going to get any harder this post actually.

Arithmetic Image Operators

Overloading an arithmetic operator in R is pretty simple as well. Take a look at the following:

`+.imageOneChannel` <- function(A, B)

That will be the function header of our plus operator. Let's analyse it bit by bit.
First, the function name is surrounded by backticks. That's the way you define so-called infix operators, as their names contain special symbols, which normal function names can't.


Then we have the plus sign and, after the dot, the class name imageOneChannel, which indicates that this is the plus operator for this specific class. Later you don't call this function as +.imageOneChannel, but just as +. And as far as I've noticed, this special operator is always called as long as one of the two parameters is of type imageOneChannel. But I actually don't know what would happen if the other summand were of another class that also has its own plus operator defined. Anyone got an idea? I'll probably test that at some point.
And last, like in any other function definition, <- function(A, B) assigns the function to the function name. As a plus operator it takes just two parameters.

Now let me show you my full implementation of this arithmetic image operator with documentation:

#' Image Addition
#' 
#' @description Addition of one image with another image or a scalar. At least 
#' one of the two parameters has to be an image. If both are images, the operation
#' is executed entrywise.
#' 
#' @export
#'
#' @param A An image of class \code{\link[Raspository]{imageOneChannel}} or 
#' a scalar.
#' @param B An image of class \code{\link[Raspository]{imageOneChannel}} or 
#' a scalar.
#'
#' @return The resulting image of class \code{\link[Raspository]{imageOneChannel}}.
#' 
#'
#' @examples
`+.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix + B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A + B@imageMatrix))
        }

    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix + B))
    }

}

Pretty easy, isn't it? We just need to check which of the two parameters is an image and then access its matrix slot.
I mean, for now our imageOneChannel isn't much more than a wrapper around a standard matrix in R. But that's OK, because we definitely want to define some behaviour later on that differs from a standard matrix. And we also need some methods that don't necessarily make sense for matrices containing things other than pixels.
Some people might know that I'm not the biggest proponent of OOP and I actually think it's overused. But if you have a data type that should express behaviour specific to it, it's sensible. Just don't implement a convoluted inheritance hierarchy with lots and lots of stuff you'll never need and that people need at least a PhD to understand. Keep it simple, stupid!

The other arithmetic image operators are implemented in the same manner. I'll spare you the documentation this time, as it's essentially all the same, but you can find it in my repository:

`-.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix - B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A - B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix  = A@imageMatrix - B))
    }

}
`*.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix * B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A * B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix * B))
    }

}
`/.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix / B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A / B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix / B))
    }

}

Testing The Arithmetic Image Operators

library(Raspository)

We can now do lots and lots of things with those operators. So first let's load our test image…

kadser <- imageOneChannelFromJpeg("kadser.jpg")

Without looking at the original image let’s take a look at the negative of the picture.

negativeKadser <- 1 - kadser
writeJPEG.imageOneChannel(negativeKadser, "negativeKadser.jpg")
A negative of cat chilling in a weird position.

Spoopy, isn’t it?

And now let’s add this picture together with another one. We’ll have to average them, so that we don’t exceed the maximal grey-value.

owl0r <- imageOneChannelFromJpeg("owl0r.jpg")
addedImages <- (kadser + owl0r) / 2
writeJPEG.imageOneChannel(addedImages, "addedImages.jpg")
A cat chilling together with an owl.

Well, this added image won't win us any contests, but at least we can now say that we can add images together. That will be valuable later. You'll see.

That’salso it for today. See you soon!

Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOperations.R and R/imageOneChannel.R.

Our Relationship To Failures



So in my last post I talked about the positive side of failures. Shortly after posting it, however, I noticed that I had forgotten about the other half of the story. While being self-contained, that post at least cried for something like an addendum.
While the last post was more focused on the personal level, this one will take a look at the bigger context, namely the relationship to failures in our society.

As a little disclaimer I have to add that this post is just my opinion on this topic at the current point in time, based on my experiences and the tales of other people. So it will be pretty subjective.

Back In School

Some people may know that I didn't necessarily have the best time back in school. And part of that also had to do with failures.
So let me first ask you: what happened to you when you failed in school? Well… I especially remember one situation back in fifth or sixth grade, where I failed to solve a math equation on the blackboard and was called stupid by the teacher. And that wasn't an exception. Stuff like that happened a lot in my school, and not only to me. Of course there were also teachers who didn't do that. But in my opinion that is something that should never happen. Teachers also have a pedagogic mandate, and calling someone stupid isn't pedagogic. It just helps to develop an unhealthy relationship to failures.

What a teacher should in fact say is something along the lines of: "You can do better!" And some teachers, I would call them the good teachers, did this.
But I guess at least I was lucky that my parents never punished me for failures in school.

Later In Life

Picture taken at the Cloef-Pfad: sometimes you have to climb a small, unsecured path to get to a nice view, with the risk of falling down.

So what’s after school… E.g. in university, vocational training and job life? I would say, that failures are mostly something we try to avoid, because the stakes are often very high.
Let’s get back to my recent failure in university. If I won’t pass the re-exam, it isn’t just an academic failure for me. It might also cause me financial troubles. Not passing the re-exam would probably mean an extension of my time as a student, which in itself is not the worst thing. But as I’m dependent on BAföG (federal support for students) and I won’t get that anymore after my fourth semester, I’ll have to see, how I will pay for my life after that.


And yes… I’m also already working, nine hours per week at the moment. But that’s not enough for rent, other bills and food together. Well, I could work more. But my time is limited. More hours of work per week also mean less time for doing homework and learning. You see the catch-22?
Right now I’m saving money for the probably one semester, where I won’t get the BAföG anymore. This could work. But I’m kind of damning myself for picking hard courses at the university instead of ones, where I already know most of the stuff and the chance of failure would be much lower.

What I’m trying to say, is that failure isn’t so “nice” anymore, if it’s actually coupled to your survival, today your ability to pay your bills. And I don’t say, that failures shouldn’t have its consequences, but sometimes not avoiding failure is a luxury you don’t have.
And I think it’s always a bit cynical when romantic stories about failure come from people in very privileged positions. That doesn’t mean, that their stories are wrong, it just means, that different people have different relationships to failures depending on their personal circumstances.

How We Treat Failures In Others

The most damning thing however might be our relationship to the failures of other people, rather than our own. Of course the earlier example with teachers in school also fits into this box. Anyway, it's always easier to judge other people than oneself.
And while it's OK to relate to others through failures, I don't think it's nice behavior to just wait for the failure of others and then be smug about it.

Furthermore, this applies not only to private persons, but also to public ones. Of course we have politicians that practice Orwellian doublethink, because otherwise… if our politicians really said things, instead of mostly prattling, we would lynch them in an allegorical sense. Recent years especially have however also shown that creating an alternative world can be a successful endeavor for avoiding "failure", instead of just saying nothing.

And it’s the same with our entertainment industry. Let’s take the Star Wars as an example. The original movies of course took elements from preexisting works, but they were something new… Something risky. George Lucas with “Star Wars: A New Hope” could have failed back then. But he didn’t… And now Star Wars is an integral part of our culture. I think that’s not a controversial thing to say…

The prequels however kind of failed. They didn't fail in every way; I mean, they were a financial success. But a lot of fans were disappointed because of certain problems the films had. I won't go into detail about them here, because I think a certain group of armchair critics did a much better job of that than I ever could. But at least the prequels tried something new. Then came episode seven, which was just a rehash of the very first movie. You could also call it a soft reboot. The movie wasn't bad, but it also didn't try anything new. And is it a surprise that it's a good movie, given that it's based on another good movie? The better question is how it compares to its template. You can answer that for yourself.


My conclusion is that soft reboots like that happen in the first place because they decrease the risk of failure. The makers are doing something that has already proven itself. And it's no wonder that movie companies mostly produce safe rather than risky movies. Because they're first of all financial companies that have to sustain themselves and their shareholders.

Conclusions

Phew, now I veered away a lot from my original point of failing an exam in my first post. But this is how my train of thought works. I think, however, that all of those examples show that failures aren't always benevolent teachers in reality, and that this has mostly to do with our relationship to them.
And while we could change certain points, others, like companies trying to avoid losing money, might just be the way things are.

This is however only my subjective view, and I would love to hear about your opinions and experiences.
And I think we can only change our relationship to failures if we understand what they mean for other people.

See ya!

The Importance of Failure



And To Admit Those Failures

If it’s in the subtitle, let’s start with admitting something right away: I didn’t
pass the exam for IPCV, being the lecture this semester, which I really liked and wrote multiple blog posts about.

Therefore I’ve been kind of down for a few days, which is, I think, understandable. And at first it was kind of hard for me to think about the importance of this failure.

But I coped with those feelings instead of running away from them, i.e. just distracting myself from them. And I have to admit that I usually distracted myself when I had a failure in the past. But you know… dealing with uncomfortable emotions is important. I bet some of you might be further along with this than I am. But it's never too late to evolve your personality.

What’s The Importance Of Failure?

To come back to the title, one might ask what the importance of failure is. And I think the right answer would be something along the lines that you can learn from them. So what did I learn from my recent failure? Well… that my way of learning wasn't suited for the exam. Instead of practicing the problems we did during the semester, which is what I should have done, I tried to understand everything. And in doing so I implemented a lot of the stuff we did in this lecture.
I mean, I learnt a lot… but I actually avoided practicing for what would be asked in the exam, because it felt uncomfortable and boring to me.

Admitting that I didn't do what I should have done isn't easy. It would be much easier to invent some kind of narrative where I was unjustly graded or where everything is just stupid.
I mean… I would like tests to be different, to be honest! But in this case, I failed. That's the plain truth.

Anything Else?

On a further note, I also think that failures, and not successes, are what shape your personality the most. We all fail sometimes. That's just in our nature as limited human beings. Of course there are humans that are truly brilliant, but even they have to sleep and only have a limited amount of time.

So what you can at least do is claim your failure, instead of denying it, and sort of make the best out of it.

And that’s actually pretty important, because if you just suppress your failures and the memories thereof, they will just come back and haunt you. And in the worst case you will do the same mistake over and over again, preventing you from evolving your personality in a new direction.

Trial And Error

Failures also always make me think of my vocational training, which I did many years ago. I was trained as a biological-technical assistant at the chemistry school Dr. Erwin Elhardt… I had a really good time there, and besides learning molecular biology I also learnt how to organize myself… amongst many other things.
And I remember my favorite teacher there always telling us that "trial and error" is what you're doing in the laboratory. And in fact I think that trial and error is something we're doing all our lives. It's one of the two big principles of learning, the other one being imitation. And it's the way to go when you're faced with a new situation where conventional wisdom fails.

That’s a picture of a young, long-haired me looking through a microscope. Credits for this picture go to my then teacher DrMartin Elhardt, who was one of the people that inspired me in becoming a hobby-photographer myself.

And that’s also actually the reason why I sometimes, in some posts in this blog first show a few mistakes I made or things I tried before I reached my final conclusion. So that you don’t have to do the same mistakes!
And also to show you, that I often don’t immediately get to the right conclusion.

Conclusion On Failures

So what’s left to say? Claim you failures for yourself and own them. Otherwise they will own you 2.

And for me personally, I gotta learn for the re-exam of the IPCV lecture. 🙂
I now have about two months and I've already made kind of a fight plan.
If I have the time, I'll also make some blog posts about what I'm learning.
Not necessarily code, but maybe some how-tos on typical problems from the lecture.
And I'll definitely make blog posts about what I've already implemented. So overall I'm optimistic that I'll make it in the re-exam!

Have a nice evening!

Your Own Code Toolbox




It’s Important

Well… You might’ve already asked yourself, why I’m doing this blog and the accompanying repository Raspository, which compiles a lot of code to solve sometimes completely unrelated scientific problems.
To make it short, it’s something like a toolbox and in this blog post I wanna convince you, that it is something you need as well, if you’re studying Bioinformatics, Computer Science or basically anything related.

First let me clarify that the Raspository is not my only toolbox. I also have one programmed in Java, which was my preferred programming language a few years ago. And then there's another one in Python, which is more or less the first programming language I taught myself. They're not public, and I'm not here to convince you that you need a public code toolbox. There are reasons for and against a public one. Today I just want to convince you that it's important to have one at all.

I got this hint from a PhD student during my fourth Bachelor's semester, and I'm quite lucky that I almost immediately started to act on it. But even if you're in your Master's already… it's never too late. You've probably become much more proficient in coding in the meantime as well.

Reasons For Having a Code Toolbox

There are generally a lot of reasons why you would need one, and I will try to compile those that seem most important to me, not in top-ten form, but in beautiful prosaic form!
The most important one is probably that there are many problems you will have to solve more than once. For this case it is nice to have something ready to go. And there are many other benefits that come with this. You could say, for example, that such a code toolbox improves your market value as a Computer Scientist.

But that reason is quite trivial… Now let's get to one that you probably haven't thought of. Let's say you finished your Master's degree and are now working on your PhD. You have to take care of a tutorial for your professor, and of course you need some exercises for your students. You should probably know beforehand whether the problem you come up with is solvable and how hard it is. For this reason it's advantageous if you have some problems you solved yourself in your time as a student.
And you could probably even use your code for validation purposes. So it will be a big time saver for you. Which is nice when you're a PhD student and actually wanna focus on your research. And in teaching, too, time is better spent helping students than coming up with new problems.

And last, but not least, it is a way to show that you can write code. And more specifically, that you can do it on your own. And as not every Computer Science program requires you to code, it's definitely something with which you can set yourself apart a bit.

What to Put Inside

As a rule of thumb, I would say you can put everything inside your code toolbox that is reusable. So code that is very specific or has hard-coded file paths in it has no place inside the toolbox.
Sometimes you will have to change your code a little bit to make it fit your toolbox, and adding some documentation is probably also good.
Besides that, I can't think of any sensible restrictions right now on what to put inside.
If your toolbox becomes too complex at some point, it could be wise to divide it into several independent packages.
But yea… what are you still waiting for? 😛 Start your own code toolbox!

Rcpp – Integrating C++ Into R



Using Rcpp

Following my last post, I will now show you how to integrate C++ code into an R package using Rcpp. Nothing revolutionary, but maybe it will still help some of you. 🙂
I will demonstrate this on my GitHub repository and R package Raspository. And I will progress on optimizing the dithering method I implemented last time. For the following I assume that you already use roxygen2.

Motivation

So first you should ask yourself why you wanna introduce C++ code into your package. One very valid reason might be that the C++ code already exists and you just want to integrate it into R, or maybe even just write an R wrapper around it.
The main reason why I did this was to speed up my code. Some operations in R are already pretty fast. But if you want to write a function with a loop that does complicated things, which would be torture to implement with apply functions, if even possible, C++ is a good choice.

There might of course be even more reasons, but we will save them for another time…
Let’s now get to how to do it.

Preparation For Integrating Rcpp Into R Packages

Before we can seamlessly integrate C++ into our package, we need to do some preparations. First we have to import the Rcpp package into our namespace and link to its dynamic library.
For this purpose we add the following to the DESCRIPTION file of the package:

Imports: Rdpack, BEclear, futile.logger, stats, dequer, jpeg, data.table, RobustRankAggreg, igraph, graphics, grDevices, methods, abind, Rcpp
LinkingTo: Rcpp

And we also have to use the dynamic library that will be created in our package. For this purpose we add the following statement to Raspository.R:

#' Raspository-package
#' 
#' @author David Rasp
#' 
#' @docType package
#' 
#' @import Rdpack
#' 
#' @title The Raspository
#' 
#' @useDynLib Raspository
"_PACKAGE"

As far as I understand it, you could also add this statement anywhere in the documentation of your package.

Creating our function with Rcpp

So the translation of the dissipatePixel function from last time is pretty straightforward. It's just a matter of changing the syntax a bit. I also added a roxygen skeleton. It works the same as in R, except that you start a line with two slashes (//') instead of a hash (#'). The namespace of Rcpp provides you with classes like NumericMatrix that can be used similarly to their R counterparts, and the objects coming from R can be treated as such. If you want some more reference material, I would advise you to read the Rcpp quickref by Dirk Eddelbuettel. I have it open most of the time I'm working with Rcpp.

#include <Rcpp.h>
using namespace Rcpp;


//' Calculating the Dissipation of Pixels
//' 
//' @export dissipatePixel
//' @import Rcpp
//' 
//' 
//' @references \insertRef{Hagenburg2009}{Raspository}
//' 
//' @return the image as matrix after the next time step
//'
// [[Rcpp::export]]
SEXP dissipatePixel(const SEXP& imgOriginal, const double& minimalTreshold) {
    NumericMatrix img(imgOriginal);
    NumericMatrix imgAtNewStep(img.nrow(), img.ncol());

    for(int i = 0; i < img.nrow(); i++){
        for(int j = 0; j < img.ncol(); j++){

            double currentPixel = img(i,j);
            double dissipated = 0.0;

            if(currentPixel > 1.0 || currentPixel < minimalTreshold){

                double toDissipate = currentPixel; 

                if(currentPixel > 1){
                    toDissipate -= 1.0;
                }

                double tmp1 = toDissipate / 9.0;
                double tmp2 = toDissipate / 36.0;

                if(i > 0){
                    imgAtNewStep(i - 1, j) += tmp1;
                    dissipated += tmp1;
                }

                if(j > 0){
                    imgAtNewStep(i,j - 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i < img.nrow() - 1){
                    imgAtNewStep(i + 1,j) += tmp1;
                    dissipated += tmp1;
                }

                if(j < img.ncol() - 1){
                    imgAtNewStep(i,j + 1) += tmp1;
                    dissipated += tmp1;
                }

                if( i > 0 && j > 0){
                    imgAtNewStep(i - 1,j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i > 0 && j < img.ncol() - 1){
                    imgAtNewStep(i - 1,j + 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i < img.nrow() - 1 && j > 0){
                    imgAtNewStep(i + 1,j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i < img.nrow() - 1 && j < img.ncol() - 1){
                    imgAtNewStep(i + 1,j + 1) += tmp2;
                    dissipated += tmp2;
                }


            }else{

                double tmp1 = currentPixel / 9.0;
                double tmp2 = currentPixel / 36.0;

                if( i > 0 && img(i - 1,j) > currentPixel && img(i - 1,j) < 1){
                    imgAtNewStep(i - 1,j) += tmp1;
                    dissipated += tmp1;
                }

                if( j > 0 && img(i,j - 1) > currentPixel && img(i,j - 1) < 1){
                    imgAtNewStep(i,j - 1) += tmp1;
                    dissipated += tmp1;
                }

                if(i < img.nrow() - 1 && img(i + 1,j) > currentPixel && img(i + 1,j) < 1){
                    imgAtNewStep(i + 1,j) += tmp1;
                    dissipated += tmp1;
                }

                if(j < img.ncol() - 1 && img(i,j + 1) > currentPixel && img(i,j + 1) < 1){
                    imgAtNewStep(i,j + 1) += tmp1;
                    dissipated += tmp1;
                }

                if( i > 0 && j > 0 && img(i - 1,j - 1) > currentPixel && img(i - 1,j - 1) < 1){
                    imgAtNewStep(i - 1,j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i > 0 && j < img.ncol() - 1 && img(i - 1,j + 1) > currentPixel && img(i - 1,j + 1) < 1){
                    imgAtNewStep(i - 1,j + 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i < img.nrow() - 1 && j > 0 && img(i + 1,j - 1) > currentPixel && img(i + 1,j - 1) < 1){
                    imgAtNewStep(i + 1,j - 1) += tmp2;
                    dissipated += tmp2;
                }

                if( i < img.nrow() - 1 && j < img.ncol() - 1 && img(i + 1,j + 1) > currentPixel && img(i + 1,j + 1) < 1){
                    imgAtNewStep(i + 1,j + 1) += tmp2;
                    dissipated += tmp2;
                }

            }

            // add the non dissipated amount to the same pixel in next time-step
            imgAtNewStep(i,j) += currentPixel - dissipated;
        }
    }

    return imgAtNewStep;
}

After writing and saving this function, we have to generate the RcppExports.cpp and RcppExports.R files, which basically contain a wrapper for our C++ function that, after compiling the code, can be loaded in R.
We can comfortably do this with:

Rcpp::compileAttributes()
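
Just to give you an idea of what gets generated: the RcppExports.R file will then contain an R wrapper along the following lines (shown for illustration only; the exact shape depends on your Rcpp version):

dissipatePixel <- function(imgOriginal, minimalTreshold) {
    .Call(`_Raspository_dissipatePixel`, imgOriginal, minimalTreshold)
}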

Then we should roxygenise our package, so that the namespace and the documentation get updated. We do this with:

roxygen2::roxygenise()

Now finally we can recompile the whole package and load the new version. The simplest way imo is to just use Ctrl+Shift+B in RStudio.

Performance comparison

Of course we should also check how much faster the new version is. I'll use microbenchmark for this purpose.

library(microbenchmark)
data <- matrix(runif(625), 25, 25)
microbenchmark(dissipatePixel(data, minimalTreshold = 0.05), dissipatePixelOld(data, minimalTreshold = 0.05))
## Unit: microseconds
##                                             expr      min       lq
##     dissipatePixel(data, minimalTreshold = 0.05)   70.891   72.368
##  dissipatePixelOld(data, minimalTreshold = 0.05) 4479.941 4622.600
##        mean   median        uq        max neval
##    95.23017   77.003   82.4445   1486.321   100
##  6514.79719 4677.444 4779.1125 161310.562   100

I think the result speaks for itself. This optimization was very worthwhile: the version implemented in C++ is about 60 times faster. That's a lot. This could be the difference between one second and a minute. And of course longer calculations also use more electricity, which means they have a larger CO2 footprint. So be nice to our atmosphere and code efficiently. 😉

That’s it for now.
Later,
David.

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under src/dissipatePixel.cpp.

Lattice Boltzmann Dithering



Revisiting Dithering

It’s been a while since my last post and an even longer while since the last one about image processing. However I’ll just catch up, where I left off. The last post in this category was about image dithering. A common usage of dithering is the conversion of a grey scale picture to a picture consisting just out of black and white pixels, saving space and also useful for some print media. On a further note dithered images have a nice visual style, I would call something like dollar bill style.

Conventional dithering methods however also introduce artifacts to pictures, and of course they also lose sharpness. So the question arises whether one could improve on those conventional dithering methods in order to reduce artifacts and increase sharpness.
If you're mathematically minded, you might also be worried that conventional dithering is not rotationally invariant, which I assume is due to the fact that the dithering kernel is not symmetric and due to the way one iterates over the picture. Rotationally invariant basically means that the output of a method doesn't change when you rotate your input image, which is of course a desirable trait.

So what about taking some inspiration from physics? After all, physical laws of motion are rotationally invariant. Lattice Boltzmann methods are a way of computing dynamics in fluids, and they have been successfully employed for dithering by Hagenburg et al. (10.1007/978-3-642-10520-3_91). The paper about this research also served as my main source for the method and its implementation, even though my specific implementation might differ in some details.
I'll call the method Lattice Boltzmann Dithering throughout the rest of the article, as I didn't find any other handy name for it.

The Idea Behind Lattice Boltzmann Dithering

The idea behind the method is to model the picture like a fluid with particles in it. In this case the grey values are the particles: a pixel that is black (with a grey value of 0) has, so to say, no particles in it. Those particles dissipate in the fluid over time, with time modelled as discrete steps. During each time step they dissipate according to the following laws:

  • If a pixel has a value v greater than 1, where 1 is the maximal grey value, (v - 1) is distributed amongst the neighbors of the pixel and the value of the original pixel becomes 1.
  • If a pixel's value v is lower than a minimal threshold the user can set, v is distributed over its neighbors and the pixel itself becomes 0.
  • Otherwise, a fraction of the pixel's value is distributed to each neighbor that has a larger value while still being smaller than 1. This fraction is subtracted from the pixel itself.

I hope you see how, with these laws, pixels with a high value (many particles) attract more value. After some time steps you should more or less only be left with black and white pixels this way.
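
A short note on the fractions you'll see in the implementation below: each of the four axial neighbors receives 1/9 of the amount to dissipate and each of the four diagonal neighbors 1/36. So in the interior of the image a total of 4 · 1/9 + 4 · 1/36 = 5/9 is given away per step and 4/9 stays at the pixel, which mirrors the weights of the D2Q9 lattice commonly used in Lattice Boltzmann methods. At the borders, whatever can't be passed on stays at the pixel as well.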

The whole process is stopped when the difference from one time step to the next becomes sufficiently small. And that's about it… not really that complicated, if you ask me! 🙂
Of course the paper goes into more detail about the theoretical background and proves some things about the method. If you're interested, you should absolutely read it.

Implementation of Lattice Boltzmann Dithering

And now let’s come to the part you all waited for… The implementation. First I’ll implement a method for the dissipation of the values from each pixel, which will e executed in each time step to get the next distribution of values.

One thing to keep in mind when working with images is that they have borders. You don't want to dissipate values across the borders, or worse, access a part of the image that isn't there. So you have to treat the pixels at the borders differently. In the case of this method, I always check first whether a neighboring pixel is there or not.

dissipatePixel <- function(img, minimalTreshold = 0.01){


    imgAtNewStep <- matrix(c(0), ncol = ncol(img), nrow = nrow(img))

    for(i in seq(nrow(img))){
        for(j in seq(ncol(img))){
            if(img[i,j] > 1 || img[i,j] < minimalTreshold){

                toDissipate <- img[i,j] 

                if(img[i,j] > 1){
                    toDissipate <- toDissipate - 1
                }

                dissipated <- 0

                if( i > 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if( j > 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(i < nrow(img)){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if(j < ncol(img)){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 9.0 * toDissipate
                }

                if( i > 1 && j > 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i > 1 && j < ncol(img)){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i < nrow(img) && j > 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                if( i < nrow(img) && j < ncol(img)){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * toDissipate
                    dissipated <- dissipated + 1.0 / 36.0 * toDissipate
                }

                ## add the non dissipated amount to the same pixel in next time-step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }else{

                dissipated <- 0
                currentPixel <- img[i,j]

                if( i > 1 && img[i - 1,j] > img[i,j] && img[i - 1,j] < 1){
                    imgAtNewStep[i - 1,j] <- imgAtNewStep[i - 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if( j > 1 && img[i,j - 1] > img[i,j] && img[i,j - 1] < 1){
                    imgAtNewStep[i,j - 1] <- imgAtNewStep[i,j - 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(i < nrow(img) && img[i + 1,j] > img[i,j] && img[i + 1,j] < 1){
                    imgAtNewStep[i + 1,j] <- imgAtNewStep[i + 1,j] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if(j < ncol(img) && img[i,j + 1] > img[i,j] && img[i,j + 1] < 1){
                    imgAtNewStep[i,j + 1] <- imgAtNewStep[i,j + 1] + 1.0 / 9.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 9.0 * currentPixel
                }

                if( i > 1 && j > 1 && img[i - 1,j - 1] > img[i,j] && img[i - 1,j - 1] < 1){
                    imgAtNewStep[i - 1,j - 1] <- imgAtNewStep[i - 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i > 1 && j < ncol(img) && img[i - 1,j + 1] > img[i,j] && img[i - 1,j + 1] < 1){
                    imgAtNewStep[i - 1,j + 1] <- imgAtNewStep[i - 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i < nrow(img) && j > 1 && img[i + 1,j - 1] > img[i,j] && img[i + 1,j - 1] < 1){
                    imgAtNewStep[i + 1,j - 1] <- imgAtNewStep[i + 1,j - 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                if( i < nrow(img) && j < ncol(img) && img[i + 1,j + 1] > img[i,j] && img[i + 1,j + 1] < 1){
                    imgAtNewStep[i + 1,j + 1] <- imgAtNewStep[i + 1,j + 1] + 1.0 / 36.0 * currentPixel
                    dissipated <- dissipated + 1.0 / 36.0 * currentPixel
                }

                ## add the non dissipated amount to the same pixel in next time-step
                imgAtNewStep[i,j] <- imgAtNewStep[i,j] + (img[i,j] - dissipated)
            }
        }
    }



    return(imgAtNewStep)

}

Done with that! Now the easy part…
But as the implementation in R with loops is incredibly inefficient, I'll just run 50 time steps this time. I will however implement this method in C++ at some later point, where loops aren't inefficient. That will also serve as a good example of how to integrate C++ code into R.

lbDithering <- function(img, epsilon, minimalTreshold){

    i <- 0
    while(TRUE){

        imgAtNewStep <- dissipatePixel(img = img, minimalTreshold = minimalTreshold)

        #if(norm(imgAtNewStep - img, type = "2") < epsilon){
        if(i >= 50){
            return(imgAtNewStep)
        }else{
            img <- imgAtNewStep
        }
        i <- i + 1
    }
}

Usage

Now let’s reap the fruits of our coding and test the method on a picture I’ve selected. 🙂

birb <- imageBWFromJpeg("birb.jpg")
birbDithered <- lbDithering(birb@current, epsilon = 20, minimalTreshold = 0.05)
writeJPEG(round(birbDithered), "birbDithered.jpg")

Note that the round() call snaps every pixel to either 0 or 1, pure black or white, which is exactly what a dithered image should consist of. Let me show you the resulting Jpeg:

A grainy picture of two pigeons lying down, dithered with the Lattice Boltzmann Method.
Image dithered with Lattice Boltzmann Method

Isn’t that beautiful? As comparison the original:

The original picture of two pigeons lying down.
Original image

And a conventionally dithered version:

birbDithered2 <- errorDiffusiondDithering(birb, method = "mea")
writeJPEG(birbDithered2@current, "birbDithered2.jpg")
A grainy picture of two pigeons lying down, dithered with the mea dithering.
Image dithered with minimized average error

You can see more structure in the one created with Lattice Boltzmann Dithering, can’t you? And you can also get a better sense of how light and shadow are distributed.
So overall a pretty nice algorithm, I would say! Although I like the dollar-bill look of the conventional one as well.

So that’s it for now! Until soon…

Yours, David

Availability Of The Code

You can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageConversion.R.

And well… You might’ve noticed that I used some of my methods/classes a bit differently than in the last few posts. Having some distance from coding this stuff, I noticed I have to change some things about it. Although I don’t know yet if I’ll make a blog post about it or not. Do you wanna read one?

D’Hondt Method


And Also Adams

In my last post I showed you the seat allocation method called the Sainte-Laguë/Schepers method. I recommend reading it before you continue with this post, as this post about the D’Hondt method heavily builds upon it.

Yepp, this vulture is cute!

There are actually two seat allocation methods that are pretty similar to Sainte-Laguë/Schepers. One of them is the D’Hondt method, the other one the Adams method.
Both are really remarkably similar to the method from my last post, with the only difference being the way they round the seats. While Sainte-Laguë/Schepers uses standard rounding, D’Hondt uses the floor function, meaning that it always rounds to the next lower integer. And Adams uses the ceiling function, which rounds to the next higher integer.
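To make the difference concrete, here’s a quick illustration with some toy quotients I made up:

## the same seat quotients under the three rounding rules
quotients <- c(0.4, 1.7, 2.6)
round(quotients)    ## Sainte-Laguë/Schepers: 0 2 3
floor(quotients)    ## D'Hondt:               0 1 2
ceiling(quotients)  ## Adams:                 1 2 3

(A small caveat: R’s round() rounds exact halves to the nearest even number, which can matter in rare edge cases.)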

Now immediately you should scream: “STOP! What?! Adams uses the ceiling function?! So does this mean that every party that gets votes gets at least one seat?!” The answer would be yes-ish. Yes, if your election doesn’t have an election threshold, every party that gets votes would get at least one seat.
“Well, isn’t that incredibly unfavorable?” Yepp… But there are cases where an allocation method like this could make sense. Not for regular elections in my opinion, but for elections within parliaments. Let’s say you have 20 mandates you want to distribute among the factions of a parliament with 300 elected politicians. Then one could argue that it would be fair for every faction to get at least one mandate.
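To illustrate that scenario (with faction sizes I made up, using the seatAllocation function implemented below):

## 20 mandates distributed among the factions of a 300-seat parliament
factionSeats <- c(A = 140, B = 97, C = 45, D = 15, E = 3)
seatAllocation(factionSeats, seats = 20, roundMethod = ceiling)
## A: 9, B: 6, C: 3, D: 1, E: 1 -- even tiny faction E gets a mandate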

However, the Adams method is incredibly uncommon. This Wikipedia article, which also served as my source for the method, only mentions the French parliament as an example.

The D’Hondt method on the other hand is pretty common. It’s actually the most common one in this year’s EU election.
And my source was also the corresponding German Wikipedia article.

Implementation of the D’Hondt Method

Luckily I don’t have to do much to implement those two methods. I just have to change a little bit about my function from last time.

seatAllocation <- function(votes, seats, roundMethod = round){
    ## calculate the initial divisor
    divisor <- roundMethod(sum(votes) / seats)

    ## get the initial seats per party
    seatsPerParty <- roundMethod(votes / divisor)

    ## if they already satisfy the seats to be assigned, return the seat allocation
    if(sum(seatsPerParty) == seats){
        return(seatsPerParty)
    }

    ## otherwise increment or decrement the divisor until 
    ## the result fits and then return it
    if(sum(seatsPerParty) < seats){
        while(sum(seatsPerParty) < seats){
            divisor <- divisor - 1
            seatsPerParty <- roundMethod(votes / divisor)
        }
        return(seatsPerParty)
    }else{
        while(sum(seatsPerParty) > seats){
            divisor <- divisor + 1
            seatsPerParty <- roundMethod(votes / divisor)
        }
        return(seatsPerParty)
    }

}

You see what I did there? And why I love functional programming? By default it’s still the Sainte-Laguë/Schepers method, and by passing either the floor or the ceiling function as the roundMethod parameter, we get the D’Hondt or the Adams method respectively.
And we could even come up with some other rounding function in the future and use it.
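For instance — this is my own addition, not from the original method descriptions — rounding at the geometric mean of the two neighbouring integers would turn the function into the Huntington-Hill method, which is used to apportion the US House of Representatives:

## hypothetical rounding function: round up if the quotient exceeds the
## geometric mean of floor and ceiling (Huntington-Hill style)
roundGeometric <- function(x){
    lower <- floor(x)
    ifelse(x > sqrt(lower * (lower + 1)), lower + 1, lower)
}
## usable once the votes vector from the example below is defined:
## seatsHH <- seatAllocation(votes, seats = 310, roundMethod = roundGeometric)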

The classic!

Test and Compare The Methods

And without further ado, let’s test and compare the methods on our previous example.

votes <- c(AP = 11345, CVP = 563342, EP = 618713, OSP = 305952, PDP = 95001)
seatsSLS <- seatAllocation(votes, seats = 310, roundMethod = round)
seatsDH <- seatAllocation(votes, seats = 310, roundMethod = floor)
seatsA <- seatAllocation(votes, seats = 310, roundMethod = ceiling)

library(data.table)
DT <- rbind(data.table(party = names(seatsA), seats = seatsA, method = "Adams"), 
            data.table(party = names(seatsSLS), seats = seatsSLS, method = "Sainte-Laguë/Schepers"),
            data.table(party = names(seatsDH), seats = seatsDH, method = "D'Hondt"))

library(ggplot2)
g <- ggplot(DT, aes(x = party, y = seats, fill = method)) 
g <- g + geom_bar(stat = "identity", position = "dodge") 
g <- g + geom_text(aes(label=seats), position=position_dodge(width=0.9), vjust=-0.25)
g
Comparison of the different methods

Thanks, stackoverflow!
And you see… The actual difference isn’t big at all. The only thing one could say is that Adams gives a bonus to the small parties. The D’Hondt method favors the big ones a bit. And Sainte-Laguë/Schepers is somewhere in the middle.

And for me at least it’s really hard to say which one is preferable. Sainte-Laguë/Schepers seems like a good compromise. However, the differences more or less only affect small parties. But for them the difference is important. This doesn’t mean that there’s no difference for large parties. One seat could mean the difference between a majority and, well… not a majority. Especially if you factor coalitions into the mix.
Maybe we will talk about possible problems in one of my next posts. I’m beginning to like this topic. I’m already thinking about becoming a lobbyist… lol.

Sainte-Laguë/Schepers method


For Allocation of Seats in the EU Parliament

On Monday I had a talk over Discord with Boris Biba, who himself runs a blog. We had wanted to do a cooperation for some time. The focus of his blog is philosophy and politics. And as I told him that I’m interested in crunching numbers, the coming EU elections are the perfect opportunity for a cooperation.
First we talked about doing something regarding the Wahl-O-Mat. In hindsight it was probably good that we decided on something else, as the Wahl-O-Mat was taken offline just today.
Then Boris brought up that he wanted to do a post about the seat allocation method used for the German votes in the EU election, the Sainte-Laguë/Schepers method. And I thought to myself that this is wonderful, as voting is basically a paradigm for statistics. So I would be able to implement a small algorithm.

So be sure to also check out Boris’ post, which you can find here, if you’re able to read German!

More otter selfies means more Creative Commons selfies! Like this one.

What I’ll be doing in this post is explaining the seat allocation method called Sainte-Laguë/Schepers and then giving you a demonstrative example of it. And as an easter egg, I’ll throw in some election posters for the imaginary parties I’ll use in the example. I created those posters with Adobe Spark.

As a main source for my post, I took the corresponding article from the German Wahl-Lexikon.

Description of the Method

So there are basically three variants of this method, which all deliver the same result.
Two of them work by ranking the voting results. The other one works by simple division; it is the one used for the German part of the EU election and is called either the iterative or the divisor method.
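Just to give you an idea of the ranking variants, here is a sketch of my own (not part of my source’s description): the best-known one divides each party’s votes by the odd numbers 1, 3, 5, … and hands the seats to the largest quotients, which is equivalent to the divisor method implemented further below.

highestAverages <- function(votes, seats){
    ## one row per party, one column per divisor 1, 3, 5, ...
    quotients <- outer(votes, seq(1, 2 * seats, by = 2), "/")
    ## flat indices of the largest quotients (R matrices are column-major)
    ranking <- order(quotients, decreasing = TRUE)[1:seats]
    ## map the flat indices back to party names and count the seats
    table(names(votes)[(ranking - 1) %% length(votes) + 1])
}

But in the rest of the post I’ll stick to the divisor variant.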

The simple idea behind this divisor method is to find a divisor for the voting results that delivers the right total number of seats, if you divide each party’s votes by it and then round with standard rounding.

To find the right divisor, first the total amount of votes is divided by the number of seats to be assigned.

\(
divisor = \frac{\#votesTotal}{\#seats}
\)

Then for each party the number of votes is divided by this divisor and the result is rounded:

\(
seatsParty_{i} = round\left(\frac{\#votesParty_{i}}{divisor}\right)
\)

And if the sum of the seats of all parties matches up with the amount to be assigned, we’re already done!
If not, we have to either increment or decrement the divisor, depending on whether we have too few or too many seats.

Just think about it… If you increase the divisor, the number of seats shrinks. And vice versa: if you decrease the divisor, the number of seats grows.

And so the divisor is adjusted and the final seats per party are obtained.
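To make that concrete, here’s a quick hand calculation of mine with the example votes I’ll introduce below. The five parties together receive 1,594,353 votes and 310 seats are to be assigned, so the initial divisor is round(1594353 / 310) = 5143. Dividing each party’s votes by 5143 and rounding yields 2 + 110 + 120 + 59 + 18 = 309 seats, one too few. So the divisor is decremented; with 5142 the rounded quotients sum to exactly 310 and we’re done.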

Who wouldn’t be proud of such a beautiful plumage?

Implementation of the Sainte-Laguë/Schepers method

And of course it wouldn’t be me if I didn’t also implement the method.
Here we go…

seatAllocation <- function(votes, seats){
    ## calculate the initial divisor
    divisor <- round(sum(votes) / seats)

    ## get the initial seats per party
    seatsPerParty <- round(votes / divisor)

    ## if they already satisfy the seats to be assigned, return the seat allocation
    if(sum(seatsPerParty) == seats){
        return(list(divisor = divisor, seatsPerParty = seatsPerParty))
    }

    ## otherwise increment or decrement the divisor until 
    ## the result fits and then return it
    if(sum(seatsPerParty) < seats){
        while(sum(seatsPerParty) < seats){
            divisor <- divisor - 1
            seatsPerParty <- round(votes / divisor)
        }
        return(list(divisor = divisor, seatsPerParty = seatsPerParty))
    }else{
        while(sum(seatsPerParty) > seats){
            divisor <- divisor + 1
            seatsPerParty <- round(votes / divisor)
        }
        return(list(divisor = divisor, seatsPerParty = seatsPerParty))
    }

}

The function is basically the same as what I described under the last point in plain text. As always, if you have some questions or remarks regarding my implementation feel free to write me a comment!

Poor agenda or marketing gag? Hmmmm….

Example with the Sainte-Laguë/Schepers method

Now to test the method, let’s just come up with some arbitrary voting result for our imaginary parties introduced earlier. And of course plot them as a pie chart!

votes <- c(AP = 11345, CVP = 563342, EP = 618713, OSP = 305952, PDP = 95001)
votesSorted <- sort(votes, decreasing = TRUE)
names(votesSorted) <- paste0(names(votesSorted), " (", 
                               format(votesSorted/sum(votesSorted) * 100, digits = 2), "%)")
pie(votesSorted)
Pie chart of the voting result

Subsequently, let’s test what result the method delivers and if the percentages match up approximately.

result <- seatAllocation(votes, 310)

OK, first let’s visualize the result. But let’s not use a pie chart again, because to be honest they can be misleading. This time we will use a waffle chart, which displays the actual seats.
Of course we also need to do some preprocessing. We want the parties ordered by their size, and we want their percentage of seats in the legend.

seatsPerParty <- result$seatsPerParty
seatsPerParty <- sort(seatsPerParty, decreasing = TRUE)
names(seatsPerParty) <- paste0(names(seatsPerParty), " (", 
                               format(seatsPerParty/sum(seatsPerParty) * 100, digits = 2), "%)")
waffle::waffle(seatsPerParty)
Waffle diagram for the seat allocation

Well, there’s some difference in the percentages, but that’s to be expected, as you can’t distribute fractions of seats between the parties.

Outlook

Of course there are many other methods for allocating seats in an election. Some that are equivalent to this one and others that are not. And if you’re interested in them, I would encourage you to write me.
If you like, we can look at a bunch of them and then compare them. And we could also take a look at things like overhang seats or different kinds of voting. I think it’s a nice topic for making plots.

By the way, if you also wanna read this post in German, check out the following link!