As you might have guessed from my first post about image processing, I’m really interested in photography, and for some time I’ve been trying to showcase it. One of those endeavors has been Instagram, with which I’m not really satisfied. Roughly speaking, it’s just not the place for serious photography; that’s simply the way Instagram is made… So a few days ago I stumbled upon Wiki Loves Earth, an annual photography contest by Wikipedia. Pictures you submit to this contest become part of Wikimedia and are licensed under Creative Commons. While this basically takes away the ability to make money with my pictures, it opens up the possibility that they could be used in Wikipedia articles… Which would be nice! And the prizes for the Wiki Loves Earth contest are also nice.

Furthermore, Wikipedia has become quite an important tool for basically everyone. If I want to learn about a new topic, Wikipedia is often the first place I’ll visit, and I can’t really express how much it has helped me up until now. So it’s probably time to give something back. I’ve taken a lot of pictures of nature, so I have a lot to contribute to this contest. I’ve also already uploaded some pictures of buildings I took during my travels. If you’re interested, you can look up my uploads here.

So let’s see how this competition goes for me. 🙂 I don’t count on winning anything, because there are so many photographers out there far more skilled than me. But who knows what will come of it?

And what about you? Do you have anything to contribute to the Creative Commons? 😉

Well… Some of you might know, some won’t: I lost my smartphone a few weeks ago during a weekend trip to Bonn. Stuff like that can happen. I must have forgotten it on a bench where I sat with my friends for a while. I panicked when I realized I didn’t have my phone anymore. We returned to the aforementioned bench to no avail. It wasn’t there anymore… Maybe I lost it somewhere else. I even thought somebody had stolen it. “You look sad, David”, one of my friends said to me. And I really was at that moment. It’s funny how the loss of such a little, lifeless thing can make you sad. I don’t know if it was its value, the suspicion that someone took it or the stress it would mean for me that made me sad. But I definitely was.

About Two Weeks Without a Smartphone

I spent the next two weeks without a smartphone, and to make a long story short, it wasn’t so bad. In fact, I’d say it was mostly a positive experience. I found out that I don’t actually need Google Maps to navigate reality. That’s something I always suspected, but I had never been brave enough to try it out. Furthermore, it felt liberating not to be instantly reachable all the time. Not having to worry about WhatsApp or Telegram messages for once was really relaxing, and I still managed to meet other people. The only big bummer was everything that required a PIN via SMS. But that made me think it might be good to have an extra phone just for those PINs anyway. I don’t know yet, but it might be a more robust system.

Getting My Phone Back

Well, it turned out that I had really lost my smartphone. I almost gave up hope, but I kept checking the homepage of the lost and found office in Bonn. And one miraculous day, there was my phone!!! 🙂 So nobody stole it; most likely someone found it before we came back to the bench and brought it to the lost and found. I’m really thankful to this anonymous stranger. Everything’s back to normal for now, but I’m planning on changing my smartphone habits. I don’t know how yet, but I will probably keep you updated.

I showed you a lot of elementary stuff regarding image processing lately, so now it’s time to do something nice. What I will show you is called image dithering. But first let me give you some motivation for it.

Let’s say you have a grayscale picture and you want to paint it with only black and white. Let me demonstrate what happens if you just “round” the pixel values to black and white. For this purpose I will use a lot of the functions I implemented in the previous posts. First we have to load the picture.
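The loading code isn’t reproduced here; a minimal sketch with the jpeg package (the file name and the exact channel weights are placeholders, see the next paragraph for where the weights come from):

```r
library(jpeg)

# "Landscape.jpg" is a placeholder for whatever picture you want to use
picture <- readJPEG("Landscape.jpg")

# mix the three channels with unequal weights into one grayscale matrix
img <- picture[,,1] * 0.3 + picture[,,2] * 0.59 + picture[,,3] * 0.11
```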

Our eyes don’t treat all colors equally; that’s why I chose these proportions for mixing the colors into the grayscale value. I took the values from this post on tutorialspoint.

Now let’s convert this, quick and dirty, to completely black and white.
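A sketch of that quick and dirty conversion, assuming the grayscale pixel values live in a matrix `img` with values in [0, 1]:

```r
# "round" every pixel to the nearest extreme: 0 (black) or 1 (white)
imgNaive <- round(img)
```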

Wow! We certainly lost basically all the information in the picture.

Implementing Image Dithering

Before I show you how to implement a version of dithering, let me shortly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? How exactly would you define gray? Quite simply, as a mixture of black and white. Now think about colors and the screen you’re probably sitting in front of. It has basically only three colors (RGB!), and each of the pixels on your screen consists of three sub-pixels, one of each of those three colors. You don’t perceive them as individual dots, because they’re too close together for your eyes to distinguish. Now the question is: could we do something similar in this black and white case? And of course: yes, we can! It is called image dithering, which is by the way also applicable to colored images.

The idea is that you iterate over all of your pixels and still apply some kind of rounding function to each of them. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed. Of course there are different methods for doing so. I’ll show you two of them today.

Floyd-Steinberg Dithering

Let’s begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current
    n <- dim(pixel)[1]
    m <- dim(pixel)[2]
    for(y in seq(m)){
        for(x in seq(n)){
            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            # propagate the rounding error to the unprocessed neighbors
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel
            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }
            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }
            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }
            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }
        }
    }
    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)
    return(cropPixels(ditheredImage))
}

For the future, some kind of kernel function would be nice, to be able to apply different kernels to pictures. But for now, let’s test it.

imgFS <- floydSteinbergDithering(img)
plot(imgFS)

That’s awesome! It almost looks as if we had different gray values in our picture. There are just some minor artifacts introduced by it, meaning some apparent structures that aren’t actually present in the original.

Now let’s try another method, which has a larger kernel.

Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. Some of you might remember a time when it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let’s come to the actual method. It is called minimized average error, and again the Wikipedia article on it is pretty good.

This time the radius of the pixel’s neighborhood is increased by one. Let me show you the implementation…

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current
    n <- dim(pixel)[1]
    m <- dim(pixel)[2]
    for(y in seq(m)){
        for(x in seq(n)){
            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel
            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/48
            }
            if(x < n - 1){
                pixel[x + 2, y] <- pixel[x + 2, y] + error * 5/48
            }
            if(x > 2 && y < m){
                pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
            }
            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
            }
            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 7/48
            }
            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
            }
            if(x < n - 1 && y < m){
                pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
            }
            if(x > 2 && y < m - 1){
                pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
            }
            if(x > 1 && y < m - 1){
                pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
            }
            if(y < m - 1){
                pixel[x, y + 2] <- pixel[x, y + 2] + error * 5/48
            }
            if(x < n && y < m - 1){
                pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
            }
            if(x < n - 1 && y < m - 1){
                pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
            }
        }
    }
    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)
    return(cropPixels(ditheredImage))
}
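Both implementations share the same structure; only the diffusion weights differ. The generalized kernel function mentioned earlier might look something like this sketch, which works on a plain pixel matrix (function and argument names are my own invention):

```r
errorDiffusionDithering <- function(pixel, kernel, transformPaletteFunction = round){
    # kernel row 1 is the current row: its centre column corresponds to the
    # current pixel, so all weights at and left of the centre in row 1 must be zero
    centre <- (ncol(kernel) + 1) / 2
    n <- dim(pixel)[1]
    m <- dim(pixel)[2]
    for(y in seq(m)){
        for(x in seq(n)){
            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)
            error <- oldPixel - newPixel
            pixel[x, y] <- newPixel
            # diffuse the error according to the kernel weights
            for(ky in seq(nrow(kernel))){
                for(kx in seq(ncol(kernel))){
                    tx <- x + kx - centre
                    ty <- y + ky - 1
                    if(kernel[ky, kx] != 0 && tx >= 1 && tx <= n && ty <= m){
                        pixel[tx, ty] <- pixel[tx, ty] + error * kernel[ky, kx]
                    }
                }
            }
        }
    }
    # crop the values back into [0, 1]
    return(pmin(pmax(pixel, 0), 1))
}
```

With `matrix(c(0, 0, 7, 3, 5, 1) / 16, nrow = 2, byrow = TRUE)` this reproduces the Floyd-Steinberg weights, and the 3 x 5 Jarvis kernel gives the minimized average error variant.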

You want to see its effect, don’t you? Here you go…

Do you see the difference? I think we got rid of the artifacts! Isn’t that amazing? I really love how visual image processing is. But that’s it for today… See you soon!

Availability Of The Code

You can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageConversion.R.

This post is the sequel to this week’s post about color spaces. I think it will be a rather short one, where I just implement the conversion back from HSV to RGB and then test my methods for demonstration. In doing so I used the corresponding Wikipedia article as a rough guideline.

So let me show you how I did it.

library(Raspository)
calculateRGBvalue <- function(H, C, X, m){
    if(H < 0 || H > 6){
        return(c(0, 0, 0))
    }else if(H <= 1){
        return(c(m + C, m + X, m))
    }else if(H <= 2){
        return(c(m + X, m + C, m))
    }else if(H <= 3){
        return(c(m, m + C, m + X))
    }else if(H <= 4){
        return(c(m, m + X, m + C))
    }else if(H <= 5){
        return(c(m + X, m, m + C))
    }else{
        return(c(m + C, m, m + X))
    }
}
require(abind)

## Loading required package: abind

hsvArrayToRgb <- function(hsvArray){
    # calculate the chroma
    C <- hsvArray[,,3] * hsvArray[,,2]
    H <- hsvArray[,,1] / 60
    X <- C * (1 - abs(H %% 2 - 1))
    m <- hsvArray[,,3] - C
    rgb <- mapply(FUN = calculateRGBvalue, H = H, C = C, X = X, m = m)
    rgbArray <- abind(matrix(rgb[1,], nrow = nrow(hsvArray)),
                      matrix(rgb[2,], nrow = nrow(hsvArray)),
                      matrix(rgb[3,], nrow = nrow(hsvArray)),
                      along = 3)
    return(rgbArray)
}
imageRGBFromHSV <- function(img){
    return(new("imageRGB", original = hsvArrayToRgb(img@original),
               current = hsvArrayToRgb(img@current),
               operations = img@operations))
}

So far so good… That’s basically just a one-to-one implementation of the method from the Wikipedia article, with the one difference being that I add m beforehand and not later.

So let’s come to the fun part!

image <- imageRGBFromJpeg("Mountain.jpg")

## Loading required package: jpeg

plot(image)

What a ginormous mountain! But it isn’t even the tallest mountain in Saarland, believe it or not! Now we can also plot this as a black and white picture.

plot(imageBWFromRGB(image))

Then let’s convert to HSV and plot the S (saturation) and V (value/brightness) channels as black and white pictures.

OK, and now compare this to the conversion to black and white from RGB. It’s a worse picture in my opinion, but in contrast to the other one it holds accurate information about the lighting condition of each pixel. Do you see how this could be useful in computer vision, for example? Let’s just say you’re searching your pictures for something that absorbs a lot of light. Then this representation could be of use.

And last but not least, we have to convert the image back to confirm that we actually get the original picture back.

hsv <- imageHSVFromRGB(image)
plot(imageRGBFromHSV(hsv))

Looks the same! I guess that concludes this blog post. And I already have some plans for the next few posts. So, hopefully see you! 🙂

For now we have only looked at black and white pictures, but of course I know y’all are craving color pictures! And that’s what I will do today… after attending a lecture about it.
What exactly I will do today is implement an S4 class for colored pictures with an RGB color space, similar to the one for black and white pictures, then implement one with an HSV color space and add the corresponding transformation methods between the two classes.
So it’s pretty basic stuff as well, but especially the HSV version could be pretty useful for manipulating pictures or for further use in computer vision procedures.

RGB Color Space

Before, we have only looked at pixels with a one-dimensional value between zero and one.
In a more mathematical way, we looked at a function:

\(
I: \mathbb{N} \times \mathbb{N} \rightarrow [0, 1]
\)

If this notation confuses you… just ignore it! In other words: the picture is a function of two natural-numbered coordinates that gives you back a real-numbered value (an intensity) between zero and one.

If we want to do the same with colored images, it is useful to define each pixel as having three dimensions: one for red, one for green and one for blue. Hence the name RGB.
This way we have a function:

\(
I: \mathbb{N} \times \mathbb{N} \rightarrow [0, 1]^{3}
\)

Again in other words: the picture is a function of two natural-numbered coordinates that gives you back a tuple of three real-numbered values between zero and one. By the way, this RGB color space is probably what you’re looking at right now, as computer monitors usually use it, though of course things in reality are more complicated. There are more than three wavelengths of light, but as our color vision system has only three different color receptors, an RGB color space is good enough for the human eye. In scientific systems, however, it could very well be that you want more than three colors in a picture. Coming originally from molecular biology, I know that there are many distinct wavelengths that have some biological importance, e.g. where certain molecules or functional groups of molecules absorb light. So you could imagine using higher-dimensional color spaces for some scientific applications.

But let’s return to RGB color spaces. In programming terms, one way of implementing one is with a three-dimensional array, which is pretty straightforward.
So without further ado, let me show you my implementation. It’s pretty simple, as the jpeg package already returns the right kind of array when reading a JPEG file.

Now I also want a method to convert an RGB image into a black and white one.

library(Raspository)
imageBWFromRGB <- function(img, chPortion = c(0.33, 0.33, 0.33)){
    if(sum(chPortion) > 1){
        stop("Channel portions mustn't add up to more than one.")
    }
    original <- img@original[,,1] * chPortion[1] + img@original[,,2] * chPortion[2] +
        img@original[,,3] * chPortion[3]
    current <- img@current[,,1] * chPortion[1] + img@current[,,2] * chPortion[2] +
        img@current[,,3] * chPortion[3]
    return(new("imageBW", original = original, current = current,
               operations = img@operations))
}

The parameter chPortion is a vector that indicates how much each channel of the original color space should contribute to the resulting black and white picture. As a matter of course, its components aren’t allowed to add up to more than one.

HSV Color Space

Another color space I want to show you today is HSV. While you can imagine RGB as a cube, in the same analogy HSV would be a cylinder, with one polar coordinate H, which stands for hue. This coordinate determines the color of a given pixel.
The other two coordinates are S, the saturation, and V, the value (brightness) of a pixel. Those two coordinates are, as before, numbers between zero and one.

Now you could ask why this representation is useful, and the answers are many. Let me just name the editing of pictures as one of them. Say you have a picture that is too dark, but the colors themselves are OK. Then you could use the HSV color space to change the value/brightness while leaving hue and saturation the same.
Of course there are many other color spaces with other neat properties you might want to exploit.
However, instead of dreaming about the perfect color space, which feels like magic, let’s just implement this one, including the conversion from RGB.
We’ll also need some helper functions for it. And please, if you don’t understand something, just ask me in the comments. There is no shame in not understanding something, and I’ll be happy to give you an explanation to help you understand it.

imageHSV <- setClass("imageHSV", slots = list(original = "array",
                                              current = "array",
                                              operations = "list"))
calculateHUE <- function(max, maxIndex, min, r, g, b){
    h <- 0.0
    if(max == min){
        return(h)
    }else if(maxIndex == 1){
        h <- 60.0 * ((g - b) / (max - min))
    }else if(maxIndex == 2){
        h <- 60.0 * (2.0 + (b - r) / (max - min))
    }else if(maxIndex == 3){
        h <- 60.0 * (4.0 + (r - g) / (max - min))
    }
    # if the value is negative, add 360° to it
    if(h >= 0){
        return(h)
    }else{
        return(h + 360)
    }
}
rgbArrayToHsv <- function(rgbArray){
    # get the maximal color and its index in each pixel
    max <- apply(rgbArray, c(1, 2), max)
    maxIndex <- apply(rgbArray, c(1, 2), which.max)
    # get the minimal color in each pixel
    min <- apply(rgbArray, c(1, 2), min)
    # calculate the hue for each pixel
    h <- mapply(FUN = calculateHUE, max = max, maxIndex = maxIndex, min = min,
                r = rgbArray[,,1], g = rgbArray[,,2], b = rgbArray[,,3])
    # convert the vector back to a matrix
    h <- matrix(h, ncol = ncol(max))
    # calculate the saturation
    s <- (max - min) / max
    # set values to zero where max is 0 (division by zero -> NA)
    s[is.na(s)] <- 0
    # max is equal to v (value/brightness)
    v <- max
    # bind the matrices together to an array and return it
    require(abind)
    hsvArray <- abind(h, s, v, along = 3)
    return(hsvArray)
}
imageHSVFromRGB <- function(img){
    return(new("imageHSV", original = rgbArrayToHsv(img@original),
               current = rgbArrayToHsv(img@current),
               operations = img@operations))
}

Phew, this took more effort than expected. Luckily I feel comfortable using the various apply functions in R. They might seem alien at first, but there’s a lot of neat stuff you can do once you know how to use them.
One side note, though… The calculateHUE function is very time consuming, because it is called a lot. So I think it would be worthwhile to re-implement it in C++ at some point to speed things up. Probably a good exercise for the future. I’ll keep you posted. 🙂

Anyway, that’s enough for me today… David tired now. 😮 The conversion from HSV back to RGB will probably come in the next few days.
So, see ya!

Availability Of The Code

And of course you can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageRGB.R and R/imageHSV.R.

I wrote this function months ago while writing the report for my seminar, and I’ve wanted to make a post about this Bookmark-Coloring Algorithm ever since, but I never found the right chance. As I wrote some documentation for the Raspository yesterday evening, I thought it would be nice to make a post about this specific topic, and using the Bookmark-Coloring Algorithm as the example seemed fitting.

The Bookmark-Coloring Algorithm

This algorithm’s purpose is to generate a personalized PageRank. In more detail: given a starting website, the algorithm calculates the chance of ending up at other websites. But this idea is applicable to other fields as well. In my post about network-based integration through heat diffusion I showed you a similar method applied to a network of multi-omic data. You could use the Bookmark-Coloring Algorithm on the same data set. The basic idea behind it is that you have some color that diffuses through a network, which is in my opinion equivalent to heat diffusing through a network. Correct me if I’m wrong.

I implemented the algorithm following the paper about it by Pavel Berkhin (10.1080/15427951.2006.10129116). More precisely, I implemented Algorithm 2 from the paper. So let me show you my implementation:

require(igraph)
require(dequer)

BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){
    # initialise the vector of transition chances
    p <- c()
    p[V(graph)] <- 0
    q <- queue()
    pushback(q, v)
    # initialise the vector that indicates how much color is in each node
    colorInVertex <- c()
    colorInVertex[v] <- 1
    # execute as long as queue q has elements
    while(length(q) > 0){
        i <- pop(q)
        w <- colorInVertex[i]
        # use up the color in the node
        colorInVertex[i] <- NA
        p[i] <- p[i] + retentionCoefficient * w
        # if all color is used up, continue to the next element in the queue
        if(w < tol){
            next
        }
        # execute for all neighbors
        for(j in neighbors(graph, i, mode = "out")){
            if(!is.na(colorInVertex[j])){
                # add color to the neighbor
                colorInVertex[j] <- colorInVertex[j] +
                    ((1 - retentionCoefficient) * w / degree(graph, i, mode = "out"))
            }else{
                # initialise the color in the neighbor
                pushback(q, j)
                colorInVertex[j] <- (1 - retentionCoefficient) * w /
                    degree(graph, i, mode = "out")
            }
        }
    }
    return(p)
}

I wrote some comments that hopefully help you understand what’s going on. That’s also the first part of the documentation. It’s the step you’ll probably do while writing your code, and in my opinion it’s always useful.

So are we done with the documentation? No. Not if we want to put this function into a package.

roxygen2 documentation

roxygen2 is a nice package that allows you to write in-line documentation in R. I won’t go too much into detail about it here, as there are a lot of online sources for it, but I will show you a short example of how to do it!

Now let me show you how to write the documentation for the BCA function. I will go over all the specified tags.

#' Bookmark Coloring Algorithm
#'
#' @aliases BookmarkColoringAlgorithm
#'
#' @description This function calculates a teleportation vector from a given
#' starting node to other nodes in a given network.
#'
#' @export BCA
#' @import dequer
#' @import igraph
#'
#' @param graph an object of type \code{\link[igraph]{igraph}}.
#' @param v a starting vertex from the above graph. Can be either its identifier
#' or an igraph.vs object.
#' @param retentionCoefficient the restart probability for each node.
#' @param tol a tolerance threshold, indicating the smallest value of color
#' that should still propagate further.
#'
#' @return a preference/teleportation vector
#'
#' @references \insertRef{Berkhin2006}{Raspository}
#'
#' @examples
#' library(igraph)
#' g <- make_ring(5)
#' preferenceVector <- BCA(g, 1)
BCA <- function(graph, v, retentionCoefficient = 0.4, tol = 0.001){

OK, let’s go over some of the tags… The first line is, of course, the title of the function. In addition to the description, you can also add a details tag: while the description should give a short overview of the method, you could include theoretical background in the details. Then the needed packages are imported. roxygen2 will convert these into lines in your NAMESPACE file.

With the params and return tags you should briefly describe the types of the parameters and the return value, and what they should contain.

To use the references tag as shown, you also need to import the Rdpack package and include a “REFERENCES.bib” file with the corresponding BibTeX entries in the inst folder. In my opinion you should always do this when implementing a method from some source, be it a book or a paper. Rdpack imports those references automatically from your BibTeX file into the documentation files.

Last but not least, I included a runnable example. This is important to give your users a starting point for how to use your function, but it also serves as a test: each time your package is built, the example code is run, so you will be notified if there are any errors. We will go more into automated testing another time, because there is of course more you can do. But you should always write example code if your function is visible to the user.

After writing this documentation in your file, you have to compile it by running:

roxygen2::roxygenise()

I hope that this short example will help you writing and documenting your own functions! 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/BCA.R.

Using the nomenclature developed in yesterday’s post, today I will implement a method for creating salt-and-pepper noise in images. This noise simulates dead pixels by setting them either to the lowest or the highest grey value, in our case 0 or 1.

First let’s install and load the Raspository, where the methods and class developed yesterday are located.

if (!requireNamespace("devtools", quietly = TRUE))
    install.packages("devtools")
devtools::install_github("David-J-R/Raspository")

library(Raspository)

Without further ado let me show you how to implement the salt and pepper noise.

saltAndPepperNoise <- function(object, percentage = .2){
    # select the indices to set to 0 or 1 at random
    indices <- sample(length(object@current), length(object@current) * percentage)
    # draw zeros and ones from a binomial distribution
    values <- rbinom(length(indices), 1, 0.5)
    object@current[indices] <- values
    return(object)
}

OK, our animal test subject today will be an owl.

Uhm, this looks really annoying. The other kinds of noise weren’t that disruptive. Just like dead pixels on your screen, these make you feel bad.

Introducing Two Measures of Noise

But it would also be nice to have some measure of the noise. Let’s just use the mean squared error for it. I’ll implement a function that, depending on the input parameters, compares either two pictures or the current and original picture of an imageBW object.
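The implementation isn’t reproduced in the post; a sketch of how such an MSE function might look (the stripped-down class here is only a stand-in for the Raspository’s imageBW class, so the example is self-contained):

```r
# stand-in for the Raspository's imageBW class
imageBW <- setClass("imageBW", slots = list(original = "matrix",
                                            current = "matrix"))

MSE <- function(object, other = NULL){
    # compare against the other image if given, else against the original
    reference <- if(is.null(other)) object@original else other@current
    return(mean((object@current - reference)^2))
}
```

MSE(owlNoise) then compares the noisy current state against the stored original, while MSE(owlNoise, owl) compares it against a second image, which is why both calls below return the same value.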

Nice! Now we finally have a measure for our noise. This will be especially helpful later, when we want to evaluate the performance of reconstruction methods. So let’s test it.

MSE(owlNoise)

## [1] 0.05545031

MSE(owlNoise, owl)

## [1] 0.05545031

As expected, both function calls leave us with the same result.

Let’s also implement the peak signal-to-noise ratio, which gives us back a decibel value. The lower the value, the more noise you have in the picture. You can calculate it according to the following formula, which I will also “simplify” a bit:

\(
PSNR(x, y) = 10 \cdot \log_{10}\left(\frac{MAX^{2}}{MSE(x, y)}\right)
\)

where x and y are the two pictures to compare and MAX is the maximum value a pixel can take on.

And for MAX being one in our case we get:

\(
PSNR(x, y) = -10 \cdot \log_{10}(MSE(x, y))
\)

Subsequently, it’s now time to implement this measure as well:

PSNR <- function(object, other = NULL){
    mse <- MSE(object, other)
    return(-10 * log(mse, base = 10))
}

That was pretty straightforward. So let’s test it!

PSNR(owlNoise)

## [1] 12.56096

It’s a relatively low value, which indicates a lot of noise. In my opinion this value is more intuitive, probably because of the logarithm. Indeed, the MSE value alone didn’t seem that big.

But that’s it for now. There will probably be further stuff next week, though probably not that much. Thanks for reading and have a nice remaining weekend. 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageNoise.R. But as I will expand the code, it will also probably grow there.

Oh look, it’s a pigeon on a lantern. I wonder what this little guy was thinking? But let’s not dwell on that. Instead, let’s think about how we can scramble up this picture!
I will first try multiplicative normally distributed noise, which is pretty straightforward if you followed yesterday’s post.

Can you see the difference from the additive noise of yesterday? Indeed, the darker areas of the picture seem nearly untouched by the noise. That’s because the intensity of the noise is now dependent on the intensity of the pixel. And intensity is what dark pixels are lacking… so to speak.

But it’s really annoying and redundant to have to write the same stuff over and over again, isn’t it? So let’s do what computer scientists do best: let’s generalize some methods.
But where to start?

In my view it would be useful to have some kind of (black and white) image class that saves the image and the operations applied to it. That way the process would also be reproducible, which is always nice.

As an illustration, I also overwrote the default plot function, which makes our lives even easier. In the next step I’ll implement the different noise functions. There are probably more generalized ways of doing so, but for now this will suffice. I also haven’t come up with a good way of storing the applied operations yet, so I’ll probably do that another day.
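The class and plot method aren’t shown in the post; a sketch of what they might look like (the class name is an assumption based on the repository file imageOneChannel.R):

```r
# S4 class holding the original image, the current state and the operations
imageOneChannel <- setClass("imageOneChannel",
                            slots = list(original = "matrix",
                                         current = "matrix",
                                         operations = "list"))

# overwrite the default plot function for the class
setMethod("plot", signature(x = "imageOneChannel", y = "missing"),
          function(x, y, ...){
              plot(c(0, 1), c(0, 1), type = "n", axes = FALSE,
                   xlab = "", ylab = "",
                   asp = nrow(x@current) / ncol(x@current))
              rasterImage(x@current, 0, 0, 1, 1)
          })
```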

For the future I would also like to have this working with infix operators, meaning that I could do stuff like image <- image + normalNoise(...) or image <- image * (1 + normalNoise(...)), so that I have a neat little grammar for working with images. For the moment, however, those functions will do the job. Now let’s make use of the newly implemented methods and add a lot of noise to the picture.
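One possible way to get that grammar is a method on the Arith group generic, which covers +, - and * in one go. A sketch, with a minimal stand-in class so the snippet stands alone (a normalNoise() helper producing a matching noise matrix would complete the picture, but is hypothetical here):

```r
# minimal stand-in for the image class described above
imageOneChannel <- setClass("imageOneChannel",
                            slots = list(original = "matrix",
                                         current = "matrix",
                                         operations = "list"))

# dispatch +, -, * etc. between an image and a plain matrix
setMethod("Arith", signature(e1 = "imageOneChannel", e2 = "matrix"),
          function(e1, e2){
              # apply the called operator to the current pixel matrix
              e1@current <- callGeneric(e1@current, e2)
              return(e1)
          })
```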

Haha, there’s not much left of this poor pigeon. But you can still see its outline and the lamp’s. And I’m quite certain that there are algorithms out there that could reconstruct a lot of the original image. You will see that later on. 🙂

Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageOneChannel.R. But as I will expand the code, it will also probably grow there.

As I wrote in one of my last posts, there will be some content about image processing and computer vision. And following my second lecture on it, here we go! Fortunately R has some capabilities for handling pictures, which I will make use of.

For now the topic will just be how to add noise to images, which can be useful, e.g. if you want to test the performance of an algorithm for handling noise. So we’re simulating noise. Natural noise in images can come from different sources… The most common would be high ISO values and/or long exposure times, so in general low-light conditions. However, I’ll only work with black and white pictures for today. Maybe color comes at some point in the future. 🙂

Loading Pictures Into R

First I’ll show you how to load pictures into R and how to print them again. It’s quite easy, to be honest:
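A sketch with the jpeg package (the file name is a placeholder):

```r
library(jpeg)

# "goose.jpg" stands in for whatever picture you want to load
goose <- readJPEG("goose.jpg")

# convert to black and white by averaging the three channels
gooseBW <- (goose[,,1] + goose[,,2] + goose[,,3]) / 3

# print it again via the raster graphics functions
plot(c(0, 1), c(0, 1), type = "n", axes = FALSE, xlab = "", ylab = "")
rasterImage(gooseBW, 0, 0, 1, 1)
```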

Interesting how you can still make out the goose in spite of the noise, isn’t it? Now let’s do the same with normally distributed noise. I’ll take the standard deviation of the black and white picture as the noise’s standard deviation.
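A sketch of the additive normally distributed noise, assuming the black and white picture is stored in a matrix called gooseBW (a placeholder name):

```r
# additive noise with the picture's own standard deviation
noise <- matrix(rnorm(length(gooseBW), mean = 0, sd = sd(gooseBW)),
                nrow = nrow(gooseBW))
gooseNoisy <- gooseBW + noise

# crop the values back into the valid range [0, 1]
gooseNoisy[gooseNoisy < 0] <- 0
gooseNoisy[gooseNoisy > 1] <- 1
```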

In the lecture we also covered multiplicative and impulse noise. However, I won’t do them in this post, maybe another time. Anyway, do you already see why I like R so much? 🙂 If you do linear algebra and statistics stuff, it’s just super comfy! I’ll probably show you even more reasons why when I demonstrate how to denoise pictures again. But that’s it for today.

When I talked in my first post about giving something back, I also thought about simple tutorials, like how to install software. While such tutorials are not strictly necessary, they save time… They have saved me a lot of time! Especially when you’re new to an operating system, a university, a programming language and so on. And sometimes you’re all of those things at once. To put it simply, sometimes it’s just good to be able to outsource something, so that you can spend your time thinking about more urgent stuff.

So here we go… Today I will show how I installed Coq on my system. Coq is software for writing and managing logical proofs, and I need it for my lecture Introduction to Computational Logic, which will make use of it.

For the impatient amongst you, I suggest skipping to the end… Just saying. :’D

First let me show you which Ubuntu release I’m using. Depending on your exact operating system, distribution and release, your installation might vary.

lsb_release -d

## Description: Ubuntu 18.04.2 LTS

Alright, now that you know that: first I have to install opam, which is the package manager for OCaml. What luck! With a Linux distribution based on Debian, it is available through the package manager, so I can just install it by typing:

sudo apt-get install opam

Subsequently I have to initialize the state of opam:

opam init

Now let’s list all the packages available through opam.

opam list -a | head

## # Available packages for system:
## 0install -- The antidote to app-stores
## aacplus -- Bindings for the aacplus library whi
## abella -- Interactive theorem prover based on
## abt -- OCaml port of CMU's abstract binding
## acgtk -- Abstract Categorial Grammar developm
## acme -- A library to interact with the acme
## acpc -- Chemoinformatics tool for ligand-bas
## aez -- Alt-Ergo Zero is an OCaml library fo
## afl -- American Fuzzy Lop fuzzer by Michal
## Fatal error: exception Sys_error("Broken pipe")

I piped the output into head, because there are quite a lot of packages, and I don’t want to clutter up this post with them. Don’t mind the Broken pipe error; that happens because head kills opam after reading the first ten lines.

The packages we want now are Coq and CoqIDE. We can get them by simply typing:

opam install coq coqide

OK… That didn’t work. What did opam say?

=> This package relies on external (system) dependencies that may be missing.
`opam depext conf-gtksourceview.2' may help you find the correct
installation for your system.
The former state can be restored with:
opam switch import "~/.opam/system/backup/state-20190310192746.export"

Therefore, let’s do what the output tells us to:

opam depext conf-gtksourceview.2

By installing some additional packages, this command took care of the problem for me. Afterwards, let’s try to install CoqIDE again, since Coq itself worked fine the first time.

opam install coqide

This time it worked… Perfect!

A Much Simpler Way

OK, after I couldn’t figure out how to start the IDE, I did some googling and found out that I could also just install it from the Ubuntu repository using the following command:

sudo apt-get install coqide

I probably should have checked this first; it would have saved me a lot of time. Now it works! x) Maybe my foolishness of not checking the Ubuntu repo first teaches you something.

Have a nice evening!
