# Tag Archives: Pictures

As photography is one of my hobbies, I like to take pictures. And I like to learn about them. Therefore topics like color theory, composition and different art styles are of interest to me.
But as a bioinformatician, a special breed of computer scientist, I’m also interested in pictures from a programming standpoint.

So this area of my homepage engages with the topic of pictures at large. I started by writing posts about image processing. But as I learn more and get better at photography, I will also write about more artistic topics and try to share my thoughts about them with you.

# Decomposing An Image Into Frequencies

The Fourier transform is a useful tool for disassembling a signal into its component frequencies. Among others, it has applications in signal processing, quantum mechanics and even in bioinformatics for the fast calculation of multiple alignments.
So it’s no wonder that it is also used in image processing. It is, for example, often used in the design of filters and for image convolutions in general, as it makes those fast to compute. It can furthermore be used to eliminate regular noise in images. But those are both topics for another time.

Today I will show you how to calculate the Fourier spectrum of an image, which is in any case useful for visualizing what the Fourier transform actually does. You can also use it to investigate repeating features in your image. And from there it’s just one step further to using it for denoising an image.

## How To Calculate It

I won’t go much into the theoretical background right now, but I will give you the means to calculate it. If you want more intuition for how the formulas of the Fourier transform are derived, I recommend watching the following video by 3Blue1Brown.
To understand this part you should also know at least a little bit about the concept of imaginary numbers. But maybe I’ll do a quick tutorial on them at some point later. Just let me know if you’d like to see that.

So what we actually need is the two-dimensional discrete Fourier transform, because pictures of course are two-dimensional. Besides discrete Fourier transforms there are also continuous ones, which can be applied to continuous functions.
So let me just give you the equations:

$$\hat{f}(p,q) = \frac{1}{\sqrt{N \cdot M}} \cdot \sum_{m = 0}^{M - 1}\sum_{n = 0}^{N - 1}f(n,m) \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot q \cdot m}{M}} \cdot e^{-\frac{i \cdot 2 \cdot \pi \cdot p \cdot n}{N}}$$

In this case N is the number of rows in the image and M the number of columns. The image itself is represented as the two-dimensional function $$f(n, m)$$ and $$\hat{f}(p,q)$$ is the Fourier-transformed image, where p and q are the new variables indicating row and column. And $$i$$ is the imaginary unit. It’s a solution to the equation $$x^2 = -1$$, which cannot be solved by any real number.
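To make the double sum concrete, here is a naive (and slow) direct implementation of it, checked against R’s built-in fft. The function name naiveDFT2D is just mine for illustration, and note that R’s fft computes the unnormalized sum, so the normalization factor is left out for the comparison.

```r
## naive 2D DFT, directly following the double sum (0-based indices shifted
## to R's 1-based ones); R's fft computes the *unnormalized* sum, so the
## 1/sqrt(N * M) factor is omitted here to make the two results comparable
naiveDFT2D <- function(f){
    N <- nrow(f); M <- ncol(f)
    out <- matrix(0+0i, N, M)
    for(p in 0:(N - 1)){
        for(q in 0:(M - 1)){
            s <- 0+0i
            for(n in 0:(N - 1)){
                for(m in 0:(M - 1)){
                    s <- s + f[n + 1, m + 1] *
                        exp(-1i * 2 * pi * q * m / M) *
                        exp(-1i * 2 * pi * p * n / N)
                }
            }
            out[p + 1, q + 1] <- s
        }
    }
    return(out)
}

f <- matrix(runif(12), nrow = 3)
max(Mod(naiveDFT2D(f) - fft(f)))   # practically zero
```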

Now the value of each pixel is a complex number. Do you have any idea how we could visualize this in a sensible manner? Well… Complex numbers are basically just coordinates in a two-dimensional space. So we can take their absolute value, i.e. their distance to the origin of that coordinate system. Applying this to every pixel gives us the so-called Fourier spectrum of the image.
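In R this distance is exactly what the built-in Mod function computes for complex numbers, so the two formulations agree:

```r
z <- 3 + 4i
sqrt(Re(z)^2 + Im(z)^2)   # distance to the origin: 5
Mod(z)                    # R's built-in modulus gives the same: 5
```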

In the next parts I will show you step by step how to implement all of this and make some improvements to it. Luckily for us we don’t need to implement the Fourier transform ourselves. In R it is already available as the fft function, and many other programming languages provide high-performance implementations of it. Even though implementing the Fourier transform itself isn’t that big of a deal, performance could be. But maybe I’ll do some home-grown implementation of it at a later point in time.

## Some Helper Functions

First we’ll need some helper functions. I was thinking about giving some of them their own blog post, but then again they’re not that interesting, so they wouldn’t warrant that. We will also use some helper functions developed in my last post.

To make our life easier, min and max functions for images would be nice, returning the minimal and maximal grey values of an image. The implementation is straightforward:

min.imageOneChannel <- function(object, na.rm = FALSE){
    return(min(object@imageMatrix, na.rm = na.rm))
}

max.imageOneChannel <- function(object, na.rm = FALSE){
    return(max(object@imageMatrix, na.rm = na.rm))
}

Next I’ll introduce the first of many point operations. Point operations change the value of each pixel independently of neighboring pixels. As the Fourier spectrum of an image might have values below zero or above one at certain pixels, we will need to rescale it so that it is inside the boundaries again. We can do this with the following function:

affineRescale <- function(img){
    a <- max(img) - min(img)
    b <- min(img) / a

    return(affineGreyscaleTransformation(img = img, 1.0 / a, -b))
}

After applying this function the minimal value will be zero, the maximal one, and all other values will be scaled linearly in between.
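affineGreyscaleTransformation comes from my last post. As a reminder of what affineRescale effectively does, here is the same mapping written out on a plain matrix (the function name here is purely illustrative, not the Raspository implementation):

```r
## maps the value range [min, max] of a matrix linearly onto [0, 1]
affineRescaleMatrix <- function(m){
    return((m - min(m)) / (max(m) - min(m)))
}

m <- matrix(c(-0.5, 0.2, 0.8, 1.5), nrow = 2)
range(affineRescaleMatrix(m))   # 0 1
```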

## The Fourier Spectrum

So let’s implement the calculation of the Fourier spectrum of an image. You’ll see that it is pretty straightforward…

calculateFourierSpectrum <- function(img){

    imgFTrans <- fft(img@imageMatrix)

    ## calculate the euclidean norm of the complex values
    fourierSpectrum <- new("imageOneChannel",
                           imageMatrix = sqrt(Re(imgFTrans) ^ 2 +
                                              Im(imgFTrans) ^ 2))

    return(fourierSpectrum)
}

Basically just two or three lines of code (depending on whether you count the return). And the nice thing about the native fft function in R is that it keeps the dimensions of the matrix.
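You can quickly convince yourself of that dimension-keeping in the console:

```r
m <- matrix(seq_len(6), nrow = 2)
dim(fft(m))   # 2 3, same shape as the input
```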

Now let’s load today’s image:

library(Raspository)
fence <- imageOneChannelFromJpeg("Fence.jpg")

It’s a fence with a lot of repeating structures, so it should give us some interesting results.

fftImg <- affineRescale(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft1.jpg")

Huh?! What’s that? Nothing? Well, you can see a bit in the corners, but not much.

## The Fourier Spectrum Scaled With The Logarithm

To make the spectrum more visible, we need to rescale it in a different manner. Remember that the equation for the Fourier transform contains the exponential function? So it would probably be sensible to take the logarithm to rescale the image. But remember… The logarithm of values smaller than or equal to zero is not defined. So it’s a good idea to add one before scaling with the logarithm. The log1p function in R will already do this for us.
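A quick sanity check of log1p at the critical points:

```r
log1p(0)            # 0: black pixels stay defined instead of going to -Inf
log1p(exp(1) - 1)   # 1: log(1 + x) evaluated at x = e - 1
```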

logarithmicDynamicCompression <- function(img, c = NULL){
    if(is.null(c)){
        c <- 1 / log1p(max(img@imageMatrix))
    }

    return(new("imageOneChannel", imageMatrix = c * log1p(img@imageMatrix)))
}

So let’s try it again.

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft2.jpg")

Now that’s the Fourier spectrum of the fence. But there’s something not so nice about it: the origin of the spectrum is not in the middle, but at the borders. This is a property of the discrete form of the Fourier transform; in the continuous one the origin lies in the middle.

## Shifting The Fourier Spectrum Of An Image

But lucky for us there’s a quick fix for this minor problem. We just need to multiply the image with an image of the same size first. This second image contains ones and minus ones arranged in a checkerboard-like pattern. I won’t show you here why this is so… But the Fourier transform has many neat properties you can use, so I’ll probably do a post about them in the future too.

For creating the checkerboard sign matrix I use some data.table magic. I’ll probably also do one or more posts about this awesome data structure in the future, as it makes the life of a data scientist much easier.
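If you don’t want to pull in data.table just for this, the same checkerboard sign matrix can also be built in base R with outer(); a small sketch:

```r
## (-1)^(row index + column index) gives the alternating sign pattern
signMatrix <- (-1)^outer(1:4, 1:6, `+`)
signMatrix[1:2, 1:3]
#      [,1] [,2] [,3]
# [1,]    1   -1    1
# [2,]   -1    1   -1
```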

library(data.table)

calculateFourierSpectrum <- function(img){

    ## generate a checkerboard sign matrix to shift the origin in the Fourier domain
    DT <- CJ(1:nrow(img@imageMatrix), 1:ncol(img@imageMatrix))
    DT[, sign := (-1)^(V1 + V2)]
    shiftMatrix <- new("imageOneChannel",
                       imageMatrix = as.matrix(dcast(DT, V1 ~ V2, value.var = "sign")[, -1]))

    shiftedImg <- img * shiftMatrix

    imgFTrans <- fft(shiftedImg@imageMatrix)

    ## calculate the euclidean norm
    fourierSpectrum <- new("imageOneChannel",
                           imageMatrix = sqrt(Re(imgFTrans)^2 + Im(imgFTrans)^2))

    return(fourierSpectrum)
}

Finally it’s time to also test that!

fftImg <- logarithmicDynamicCompression(calculateFourierSpectrum(fence))
writeJPEG.imageOneChannel(fftImg, "fft3.jpg")

And voila! Now I’m satisfied. 🙂
By the way, the Fourier transform flips the direction of patterns. So the big vertical stripe could be the horizon. But it could also be a boundary artifact… something I’ll show you in more detail later, and also how to fix it.
All the stripes in the other direction represent frequencies across the picture. And what’s nice… You can change features of the image here in Fourier space and then just as easily transform it back.
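The back-transformation is just fft with inverse = TRUE; since R’s fft is unnormalized, you have to divide by the number of elements yourself:

```r
m <- matrix(runif(6), nrow = 2)

## forward transform, then inverse transform; dividing by length(m)
## restores the original values
back <- Re(fft(fft(m), inverse = TRUE)) / length(m)
max(abs(back - m))   # practically zero
```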

You’ll definitely hear more about Fourier transforms from me. I hope you enjoyed this post. I’d love to hear from you.

## Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOneChannel, R/imagePointOperations and R/imageFeatures.


# More Object Oriented Programming (OOP) in R

As I told you in one of my recent posts, I implemented a lot of functions related to image processing and built a lot on my image processing framework in the R programming language. The changes include methods I learned in the lecture, e.g. convolutions, filters of different kinds and the calculation of statistical features. But I also did some code refactoring and included some new helper methods.
A set of those helper methods will be the topic of this blog post. I’m talking about arithmetic image operators. We will need those a lot in later exercises, so it’s useful to have them implemented in an easy-to-use way.

So the basic idea is just to overload the different arithmetic operators to make them usable with images of the type imageOneChannel.
It should be possible to use the functions either with two images or with one image and one scalar.

## The is. Function

As a little helper we will implement the is.imageOneChannel function. Functions with the prefix is. are used in R to test whether an object is of a given type. It doesn’t save us much over using the inherits function from base R, but it makes the process at least somewhat more comfortable.
To be honest, what I’ve implemented is just a glorified wrapper around the inherits function, but that’s generally how it is done. Of course that also leaves us with the possibility to write a little documentation using roxygen2. So here we go:

#' Is It A One Channel Image
#'
#' @description Checks if an object is of type \code{\link[Raspository]{imageOneChannel}}.
#'
#' @export
#'
#' @param x any R object.
#'
#' @return \code{TRUE}, if the object is of type \code{\link[Raspository]{imageOneChannel}}.
#'  \code{FALSE} otherwise.
#'
#'
#' @examples
is.imageOneChannel <- function(x){
    return(inherits(x, "imageOneChannel"))
}


So, that was easy. And it’s not going to get any harder this post actually.

## Arithmetic Image Operators

Overloading an arithmetic operator in R is pretty simple as well. Take a look at the following:

`+.imageOneChannel` <- function(A, B)

That will be the function header of our plus operator. Let’s analyse it bit by bit.
First, the function name is surrounded by backticks. That’s the way you define so-called infix operators, as they contain special symbols that function names normally can’t.

Then we have the plus sign and, after the dot, the class name imageOneChannel, which indicates that this is the plus operator for this specific class. Later you don’t call this function as +.imageOneChannel, but just as +. And as far as I’ve noticed, this special operator is always called as long as one of the two parameters is of type imageOneChannel. But I actually don’t know what would happen if the other summand were of another class that also has its own plus operator defined. Anyone an idea? I’ll probably test that at some point.
And last, like in any other function, <- function(A, B) assigns the function to the function name. As a plus function it takes just two parameters.
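To see the dispatch in action without any image machinery, here is a toy class with its own plus operator (class and field names are made up purely for this demonstration):

```r
## a toy S3 class wrapping a single number, with its own "+" method
`+.myWrapper` <- function(A, B){
    structure(list(value = A$value + B$value), class = "myWrapper")
}

a <- structure(list(value = 1), class = "myWrapper")
b <- structure(list(value = 2), class = "myWrapper")

## the plain "+" dispatches to `+.myWrapper` because of the class attribute
(a + b)$value   # 3
```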

Now let me show you my full implementation of this arithmetic image operator with documentation:

#' Image Addition
#'
#' @description Addition of one image with another image or a scalar. At least
#' one of the two parameters has to be an image. If both are images, the operation
#' is executed entrywise.
#'
#' @export
#'
#' @param A An image of class \code{\link[Raspository]{imageOneChannel}} or
#' a scalar.
#' @param B An image of class \code{\link[Raspository]{imageOneChannel}} or
#' a scalar.
#'
#' @return The resulting image of class \code{\link[Raspository]{imageOneChannel}}.
#'
#'
#' @examples
`+.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix + B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A + B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix + B))
    }
}


Pretty easy, isn’t it? We just need to check which of the two parameters is an image and then access its matrix slot.
I mean, for now our imageOneChannel isn’t much more than a wrapper around a standard R matrix. But that’s OK, because we definitely want to define some behaviour later on that differs from a standard matrix. And we also need some methods that don’t necessarily make sense for matrices containing things other than pixels.
Some people might know that I’m not the biggest proponent of OOP and I actually think it’s overused. But if you have a data type that should express some behaviour specific to it, it’s sensible. Just don’t implement a convoluted inheritance hierarchy with lots and lots of stuff you’ll never need and that people need at least a PhD to understand. Keep it simple, stupid!

The other arithmetic image operators are implemented in the same manner. I’ll spare you the documentation this time, as it’s essentially all the same, but you can find it in my repository:

`-.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix - B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A - B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix - B))
    }
}

`*.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix * B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A * B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix * B))
    }
}

`/.imageOneChannel` <- function(A, B){
    if(is.imageOneChannel(B)){
        if(is.imageOneChannel(A)){
            return(new("imageOneChannel", imageMatrix = A@imageMatrix / B@imageMatrix))
        }else{
            return(new("imageOneChannel", imageMatrix = A / B@imageMatrix))
        }
    }else{
        return(new("imageOneChannel", imageMatrix = A@imageMatrix / B))
    }
}

## Testing The Arithmetic Image Operators

library(Raspository)

We can now do lots and lots of things with those operators. So first let’s load our test image…

kadser <- imageOneChannelFromJpeg("kadser.jpg")

Without looking at the original image let’s take a look at the negative of the picture.

negativeKadser <- 1 - kadser
writeJPEG.imageOneChannel(negativeKadser, "negativeKadser.jpg")

Spoopy, isn’t it?

And now let’s add this picture to another one. We’ll have to average them so that we don’t exceed the maximal grey value.

owl0r <- imageOneChannelFromJpeg("owl0r.jpg")
## average the two images so the values stay within [0, 1]
addedImages <- (kadser + owl0r) / 2
writeJPEG.imageOneChannel(addedImages, "addedImages.jpg")

Well, this added image won’t win us any contests, but at least we can now say that we can add images together. That will be valuable later. You’ll see.

That’s also it for today. See you soon!

## Availability Of The Code

You can access a maintained form of the code in my GitHub repository Raspository under R/imageOperations.R and R/imageOneChannel.R.


# Revisiting Dithering

It’s been a while since my last post and an even longer while since the last one about image processing. However, I’ll just pick up where I left off. The last post in this category was about image dithering. A common usage of dithering is the conversion of a grey-scale picture to a picture consisting only of black and white pixels, which saves space and is also useful for some print media. On a further note, dithered images have a nice visual style I would call something like dollar-bill style.

Conventional dithering methods however also introduce artifacts into pictures, and of course they also lose sharpness. So the question is whether one could improve on those conventional dithering methods in order to reduce artifacts and increase sharpness.
If you’re mathematically minded, you might also be worried that conventional dithering is not rotationally invariant, which I assume is due to the fact that the dithering kernel is not symmetric, and due to the way one iterates over the picture. Rotationally invariant basically means that the output of a method doesn’t change when you rotate your input image, which is of course a desirable trait.

So what about taking some inspiration from physics? After all, physical laws of motion are rotationally invariant. Lattice Boltzmann methods are a way of computing dynamics in fluids, and they have been successfully employed for dithering by Hagenburg et al. The paper regarding this research also served as my main source for the method and its implementation, even though my specific implementation might differ in some details.
I’ll call the method Lattice Boltzmann Dithering for the rest of the article, as I didn’t find any other handy name for it.

## The Idea Behind Lattice Boltzmann Dithering

The idea behind the method is to model the picture like a fluid with particles in it. In this case the grey values are the particles: a pixel that is black (grey value 0) has, so to say, no particles in it. Those particles dissipate in the fluid over time, with time modelled as discrete steps. They dissipate according to the following laws during each time step:

• If a pixel has a value v greater than 1, where 1 is the maximal grey value, (v - 1) is distributed amongst the neighbors of the pixel and the value of the original pixel becomes 1.
• If a pixel’s value v is lower than a minimal threshold the user can set, v is distributed over its neighbors and the pixel itself becomes 0.
• Otherwise a fraction of the pixel’s value is distributed to each neighbor that has a larger value while being smaller than 1. This fraction is subtracted from the pixel itself.

I hope you see how, with these laws, pixels with a high value (many particles) attract more value. After some time steps you should be left with more or less only black and white pixels this way.

And the whole process is stopped when the difference from one time step to the next becomes sufficiently small. And that’s about it… Not really that complicated, if you ask me! 🙂
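A stopping rule like that can be phrased as a simple norm check; here is a minimal sketch (the function name and the per-pixel normalization are my choices for illustration, not taken from the paper):

```r
## TRUE once the average per-pixel change between two time steps drops below epsilon
hasConverged <- function(imgOld, imgNew, epsilon){
    return(norm(imgNew - imgOld, type = "F") / length(imgOld) < epsilon)
}

a <- matrix(0.5, nrow = 2, ncol = 2)
hasConverged(a, a + 1e-6, epsilon = 1e-3)   # TRUE, barely any change
hasConverged(a, a + 0.5,  epsilon = 1e-3)   # FALSE, still changing a lot
```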
Of course the paper goes more into detail about the theoretical background and proves some things about the method. If you’re interested, you should absolutely read it.

## Implementation of Lattice Boltzmann Dithering

And now let’s come to the part you’ve all been waiting for… the implementation. First I’ll implement a method for the dissipation of the values from each pixel, which will be executed in each time step to get the next distribution of values.

One thing to keep in mind when working with images is that they have borders. You don’t want to dissipate values across borders, or worse, access a part of the image that isn’t there. So you have to treat the pixels at the borders differently. In the case of this method, I always check first whether a neighboring pixel exists or not.

dissipatePixel <- function(img, minimalTreshold = 0.01){

    imgAtNewStep <- matrix(0, ncol = ncol(img), nrow = nrow(img))

    ## neighbor offsets with their lattice weights:
    ## 1/9 for the four direct neighbors, 1/36 for the four diagonal ones
    offsets <- rbind(cbind(c(-1, 1, 0, 0), c(0, 0, -1, 1), 1 / 9),
                     cbind(c(-1, -1, 1, 1), c(-1, 1, -1, 1), 1 / 36))

    for(i in seq(nrow(img))){
        for(j in seq(ncol(img))){

            overfull <- img[i, j] > 1

            if(overfull || img[i, j] < minimalTreshold){
                ## laws 1 and 2: distribute the excess (v - 1), or the whole
                ## remaining value v, over all existing neighbors
                toDissipate <- if(overfull) img[i, j] - 1 else img[i, j]
                dissipated <- 0

                for(k in seq(nrow(offsets))){
                    ni <- i + offsets[k, 1]
                    nj <- j + offsets[k, 2]
                    if(ni >= 1 && ni <= nrow(img) && nj >= 1 && nj <= ncol(img)){
                        imgAtNewStep[ni, nj] <- imgAtNewStep[ni, nj] + offsets[k, 3] * toDissipate
                        dissipated <- dissipated + offsets[k, 3] * toDissipate
                    }
                }
            }else{
                ## law 3: a fraction flows to each existing neighbor that has
                ## a larger value than the current pixel but is still below 1
                currentPixel <- img[i, j]
                dissipated <- 0

                for(k in seq(nrow(offsets))){
                    ni <- i + offsets[k, 1]
                    nj <- j + offsets[k, 2]
                    if(ni >= 1 && ni <= nrow(img) && nj >= 1 && nj <= ncol(img) &&
                       img[ni, nj] > img[i, j] && img[ni, nj] < 1){
                        imgAtNewStep[ni, nj] <- imgAtNewStep[ni, nj] + offsets[k, 3] * currentPixel
                        dissipated <- dissipated + offsets[k, 3] * currentPixel
                    }
                }
            }

            ## add the non-dissipated amount to the same pixel in the next time step
            imgAtNewStep[i, j] <- imgAtNewStep[i, j] + (img[i, j] - dissipated)
        }
    }

    return(imgAtNewStep)
}


Done with that! Now the easy part…
But as the implementation in R with loops is incredibly inefficient, I’ll just run 50 time steps this time. I will however implement this method at some later point in C++, where loops aren’t inefficient. This will also serve as a good example of how to integrate C++ code in R.

lbDithering <- function(img, epsilon, minimalTreshold){

    i <- 0
    while(TRUE){

        imgAtNewStep <- dissipatePixel(img = img, minimalTreshold = minimalTreshold)

        ## stop after a fixed number of steps for now; the commented-out check
        ## would instead stop once the change between steps falls below epsilon
        #if(norm(imgAtNewStep - img, type = "2") < epsilon){
        if(i >= 50){
            return(imgAtNewStep)
        }else{
            img <- imgAtNewStep
        }
        i <- i + 1
    }
}


## Usage

Now let’s reap the fruits of our coding and test the method on a picture I’ve selected. 🙂

birb <- imageBWFromJpeg("birb.jpg")
birbDithered <- lbDithering(birb@current, epsilon = 20, minimalTreshold = 0.05)
writeJPEG(round(birbDithered), "birbDithered.jpg")


Let me show you the resulting Jpeg:

Isn’t that beautiful? As comparison the original:

And a conventionally dithered version:

birbDithered2 <- errorDiffusionDithering(birb, method = "mea")
writeJPEG(birbDithered2@current, "birbDithered2.jpg")


You can see more structure in the one created with Lattice Boltzmann Dithering, can’t you? And you can also better understand how light and shadow are distributed.
So overall a pretty nice algorithm, I would say! Although I like the dollar-bill look of the conventional one as well.

So that’s it for now! Until soon…

Yours, David

## Availability Of The Code

You can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageConversion.R.

And well… You might’ve noticed that I used some of my methods/classes a bit differently than the last few times. Having some distance from coding this stuff, I noticed I have to change some things about it. Although I don’t know yet if I’ll make a blog post about it or not. Do you wanna read one?


## And So Do I

As you might have guessed since my first post about image processing, I’m really interested in photography. And for some time I’ve been trying to showcase my photography. One of those endeavors has been Instagram, with which I’m not really satisfied. Roughly speaking, it’s just not the place for serious photography. It’s just the way Instagram is made…
So a few days ago I stumbled upon Wiki Loves Earth, which is an annual photography contest by Wikipedia. Pictures you submit to this contest will be part of Wikimedia and be licensed under Creative Commons. While this basically takes away the ability to make money with my pictures, it opens up the possibility that my pictures could be used in Wikipedia articles… which would be nice!
And the prizes for the Wiki Loves Earth contest are also nice.

Furthermore, Wikipedia has become quite an important tool for basically everyone. If I want to learn about a new topic, Wikipedia is often the first place I’ll visit. And I can’t really express how much Wikipedia has helped me up until now. So it’s probably also time to give something back.
And I took a lot of pictures of nature, so I have a lot to contribute to this contest. But I also already uploaded some pictures of buildings I took during my travels. If you’re interested, you can look my uploads up here.

So let’s see how this competition goes for me. 🙂 I don’t count on winning anything, because there are so many much more skilled photographers out there than me. But who knows, what will come of it?

And what about you? Do you have anything to contribute to the Creative Commons? 😉


## Image Dithering

I showed you a lot of elementary stuff regarding image processing lately, so now it’s time to do something nice. What I will show you is called image dithering. But first let me give you a motivation for it.

Let’s say you have a grayscale picture and you want to paint it with really only black and white. To motivate dithering, I will demonstrate what happens if you just “round” the pixel values to black and white. For this purpose I will use a lot of the functions I implemented in the previous posts. At first we have to load the picture.

library(Raspository)
imgColor <- imageRGBFromJpeg("Wagner.jpg")
plot(imgColor)


The next step is to convert it to grayscale. Remember the function I implemented in this post?

img <- imageBWFromRGB(imgColor, c(0.3, 0.59, 0.11))
plot(img)


Our eyes don’t treat all colors equally; that’s why I chose these proportions for the contributions of the colors to the grayscale. I took the values from this post on tutorialspoint.
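For a single pixel the conversion is just a weighted sum of the three channels; for example:

```r
weights <- c(r = 0.3, g = 0.59, b = 0.11)
pixel <- c(r = 0.8, g = 0.4, b = 0.2)

## 0.3 * 0.8 + 0.59 * 0.4 + 0.11 * 0.2 = 0.498
sum(weights * pixel)
```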

Now let’s just quick and dirty convert this to completely black and white.

imgTmp <- img
imgTmp@current <- round(imgTmp@current)
plot(imgTmp)


Wow! We certainly lost basically all the information in the picture.

## Implementing Image Dithering

Before I show you how to implement a version of dithering, I will shortly explain the idea behind it. Let me ask you one question… Is there really such a thing as gray? Or how exactly would you define gray?
Quite simply as a mixture of black and white. Now think about colors and the screen you’re probably sitting in front of. You have basically only three colors (RGB!). And each of the pixels on your screen consists of three sub-pixels, one of each of those three colors.
You perceive them not as individual dots because they’re too close together for your eyes to distinguish. Now the question is whether we could do something similar in this black and white case. And of course: Yes, we can! It is called image dithering, which is by the way also applicable to colored images.
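The mixing intuition in numbers: a block of half black, half white pixels averages out to a mid gray when viewed from afar.

```r
block <- matrix(c(0, 1, 1, 0), nrow = 2)   # tiny black-and-white checkerboard
mean(block)                                # 0.5, perceived as gray from a distance
```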

The idea now is that you iterate over all of your pixels and apply to each of them some kind of rounding function. But then you propagate the difference between the original pixel and the rounded pixel to its neighbors that are still to be processed.
There are different methods for doing so. I’ll show you two of them today.

### Floyd Steinberg dithering

Let’s begin with the Floyd-Steinberg algorithm. I suggest you read the corresponding Wikipedia article, as it is very straightforward.

And my implementation is also pretty straightforward.

floydSteinbergDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x, y] <- newPixel

            if(x < n){
                pixel[x + 1, y] <- pixel[x + 1, y] + error * 7/16
            }

            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 3/16
            }

            if(y < m){
                pixel[x, y + 1] <- pixel[x, y + 1] + error * 5/16
            }

            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 1/16
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}


For the future, some kind of kernel function would be nice, to be able to apply different kernels to pictures. But now let’s test it.

imgFS <- floydSteinbergDithering(img)
plot(imgFS)


That’s awesome! It almost looks like we had different gray values in our picture. And there are just some minor artifacts introduced by it, meaning some apparent structures that aren’t actually present in the original.

Now let’s try another method, which has a larger kernel.

### Minimized Average Error Dithering

This dithering method was introduced by Jarvis et al. from the famous Bell Labs in 1976. So you see that this whole field is pretty old. And some of you might remember a time where it was actually difficult to transmit data from one location to another. I still remember being a six-year-old child waiting minutes for the NASA homepage to load one picture of a stellar nebula. Today image compression is of course still important for things like dynamic homepages, especially if they are mobile friendly.

OK, now let’s come to the actual method. It is called minimized average error dithering. And again, the Wikipedia article on it is pretty good.

This time the neighborhood of your pixel is increased by a range of one. Let me show you the implementation…

minimizedAverageErrorDithering <- function(img, transformPaletteFunction = round){
    pixel <- img@current

    n <- dim(pixel)[1]
    m <- dim(pixel)[2]

    for(y in seq(m)){
        for(x in seq(n)){

            oldPixel <- pixel[x, y]
            newPixel <- transformPaletteFunction(oldPixel)

            error <- oldPixel - newPixel

            pixel[x, y] <- newPixel

            if(x < n){
                pixel[x + 1, y    ] <- pixel[x + 1, y    ] + error * 7/48
            }
            if(x < n - 1){
                pixel[x + 2, y    ] <- pixel[x + 2, y    ] + error * 5/48
            }

            if(x > 2 && y < m){
                pixel[x - 2, y + 1] <- pixel[x - 2, y + 1] + error * 3/48
            }
            if(x > 1 && y < m){
                pixel[x - 1, y + 1] <- pixel[x - 1, y + 1] + error * 5/48
            }
            if(y < m){
                pixel[x    , y + 1] <- pixel[x    , y + 1] + error * 7/48
            }
            if(x < n && y < m){
                pixel[x + 1, y + 1] <- pixel[x + 1, y + 1] + error * 5/48
            }
            if(x < n - 1 && y < m){
                pixel[x + 2, y + 1] <- pixel[x + 2, y + 1] + error * 3/48
            }

            if(x > 2 && y < m - 1){
                pixel[x - 2, y + 2] <- pixel[x - 2, y + 2] + error * 1/48
            }
            if(x > 1 && y < m - 1){
                pixel[x - 1, y + 2] <- pixel[x - 1, y + 2] + error * 3/48
            }
            if(y < m - 1){
                pixel[x    , y + 2] <- pixel[x    , y + 2] + error * 5/48
            }
            if(x < n && y < m - 1){
                pixel[x + 1, y + 2] <- pixel[x + 1, y + 2] + error * 3/48
            }
            if(x < n - 1 && y < m - 1){
                pixel[x + 2, y + 2] <- pixel[x + 2, y + 2] + error * 1/48
            }
        }
    }

    ditheredImage <- new(class(img)[[1]], original = img@original,
                         current = pixel, operations = img@operations)

    return(cropPixels(ditheredImage))
}


You wanna see its effect, don’t you? Here you go…

imgMea <- minimizedAverageErrorDithering(img)
plot(imgMea)


Do you see the difference? I think we got rid of the artifacts! Isn’t that amazing? I really love how demonstrative image processing is.
But that’s it for today… See you soon!

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageConversion.R.


## Converting HSV To RGB

This post is the sequel to this week’s post about color spaces. I think it will be a rather short post, where I just implement the conversion back from HSV to RGB and then test my methods for demonstration.
In doing so I used the corresponding Wikipedia article as a rough guideline.

So let me show you how I did it.

library(Raspository)
calculateRGBvalue <- function(H, C, X, m){
    if(H >= 0 && H <= 1){
        return(c(m + C, m + X, m))
    }else if(H >= 0 && H <= 2){
        return(c(m + X, m + C, m))
    }else if(H >= 0 && H <= 3){
        return(c(m, m + C, m + X))
    }else if(H >= 0 && H <= 4){
        return(c(m, m + X, m + C))
    }else if(H >= 0 && H <= 5){
        return(c(m + X, m, m + C))
    }else if(H >= 0 && H <= 6){
        return(c(m + C, m, m + X))
    }else{
        return(c(0, 0, 0))
    }
}
require(abind)

## Loading required package: abind

hsvArrayToRgb <- function(hsvArray){

    # Calculate the chroma
    C <- hsvArray[,,3] * hsvArray[,,2]

    H <- hsvArray[,,1] / 60

    X <- C * (1 - abs(H %% 2 - 1))

    m <- hsvArray[,,3] - C

    rgb <- mapply(FUN = calculateRGBvalue, H = H, C = C, X = X, m = m)

    rgbArray <- abind(matrix(rgb[1,], nrow = nrow(hsvArray)),
                      matrix(rgb[2,], nrow = nrow(hsvArray)),
                      matrix(rgb[3,], nrow = nrow(hsvArray)),
                      along = 3)

    return(rgbArray)
}

imageRGBFromHSV <- function(img){
    return(new("imageRGB", original = hsvArrayToRgb(img@original),
               current = hsvArrayToRgb(img@current),
               operations = img@operations))
}


So far so good… That’s basically just a one-to-one implementation of the method from the Wikipedia article, with the one difference being that I add m beforehand and not later.
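As a quick sanity check of the piecewise mapping, here is a standalone single-pixel version applied to the three pure primaries. The helper `hsvToRgb` is just an illustration mirroring calculateRGBvalue, not part of the package:

```r
# Single-pixel HSV -> RGB, mirroring the piecewise cases above.
# H in degrees [0, 360), S and V in [0, 1].
hsvToRgb <- function(H, S, V){
    C <- V * S                        # chroma
    Hp <- H / 60                      # hue sector in [0, 6)
    X <- C * (1 - abs(Hp %% 2 - 1))
    m <- V - C
    rgb <- if(Hp < 1) c(C, X, 0)
      else if(Hp < 2) c(X, C, 0)
      else if(Hp < 3) c(0, C, X)
      else if(Hp < 4) c(0, X, C)
      else if(Hp < 5) c(X, 0, C)
      else            c(C, 0, X)
    rgb + m
}

hsvToRgb(0, 1, 1)    # pure red   -> c(1, 0, 0)
hsvToRgb(120, 1, 1)  # pure green -> c(0, 1, 0)
hsvToRgb(240, 1, 1)  # pure blue  -> c(0, 0, 1)
```

If the primaries round-trip correctly, the interpolated hues in between follow from the X term.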

So let’s come to the fun part!

image <- imageRGBFromJpeg("Mountain.jpg")

## Loading required package: jpeg

plot(image)


What a ginormous mountain! But it isn’t even the tallest mountain in Saarland, believe it! Now we can also plot this as a black and white picture.

plot(imageBWFromRGB(image))


Then let’s convert to HSV and plot the S (saturation) and V (value/brightness) channels as black and white pictures.

hsv <- imageHSVFromRGB(image)
plot(as.raster(hsv@current[,,2]))


Wow, that looks like something from the 90s!

plot(as.raster(hsv@current[,,3]))


OK, and now compare this to the conversion to black and white from RGB. It’s a worse picture in my opinion, but in comparison to the other one it holds accurate information about the lighting condition of each pixel. Do you see how this could be useful in Computer Vision, for example?
Let’s just say you’re searching for something in your pictures that absorbs a lot of light. Then this representation could be of use.
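A tiny sketch of that idea: threshold the V channel to flag the light-absorbing pixels. Here `vMatrix` is just a random stand-in for the `hsv@current[,,3]` matrix from above, and the 0.2 threshold is arbitrary:

```r
set.seed(7)
vMatrix <- matrix(runif(16), nrow = 4)   # stands in for hsv@current[,,3]
darkMask <- vMatrix < 0.2                # TRUE where a pixel reflects little light
which(darkMask, arr.ind = TRUE)          # coordinates of the dark pixels
```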

And last but not least, we have to convert the image back to confirm that we actually get the original picture back.

plot(imageRGBFromHSV(hsv))


Looks the same! I guess that concludes this blog post. And I already have some plans for the next few posts.
So, hopefully see you! 🙂


## Introduction To Color Space

For now we have only looked at black and white pictures, but of course I know y’all are craving for me to talk about color pictures! And that’s what I will do today… after I visited a lecture about it.
What I will do today is implement an S4 class for colored pictures with an RGB color space, similar to the one for black and white pictures, and then implement one with an HSV color space and add corresponding transformation methods between the two classes.
So pretty basic stuff as well, but especially the HSV version could be pretty useful for manipulating pictures or further using them in Computer Vision procedures.

## RGB Color Space

So far we have only looked at pixels with a one-dimensional value between zero and one.
In a more mathematical way, we looked at a function:

$$\mathbb{N} \times \mathbb{N} \rightarrow [0,1]$$

If this notation confuses you… Just ignore it! But in other words: The picture is a function of two natural numbered coordinates, that gives you back a real numbered value (an intensity) between zero and one.

If we want to do the same with colored images, it is useful to define each pixel as having three dimensions: one for Red, one for Green and one for Blue. Hence the name RGB.
This way we would have a function:

$$\mathbb{N} \times \mathbb{N} \rightarrow [0,1] \times [0,1] \times [0,1]$$

Again in other words: The picture is a function of two natural numbered coordinates, that gives you back a tuple of three real numbered values between zero and one.
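Concretely, in R this function view is just a three-dimensional array indexed by the two coordinates; here a small random array stands in for a 2×2 image:

```r
# an RGB "image": two coordinates in, an (R, G, B) triple in [0,1]^3 out
img <- array(runif(2 * 2 * 3), dim = c(2, 2, 3))
img[1, 2, ]   # the (R, G, B) triple of the pixel in row 1, column 2
```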
By the way, this RGB color space is probably what you’re looking at right now, as computer monitors usually use it, but of course things in reality are more complicated. There are more than three wavelengths of light, but as our color vision system has only three different color receptors, an RGB color space is good enough for the human eye. But of course in scientific systems it could very well be that you want to have more than three colors in a picture. Coming originally from Molecular Biology, I know that there are many distinct wavelengths that have some biological importance, e.g. where certain molecules or functional groups of molecules absorb light.
So you could imagine using color spaces of higher dimension for some scientific applications.

But let’s return to the topic of RGB color spaces. In programming terms one way of implementing it would be with a three-dimensional array, which is pretty straightforward.
So without further ado let me show you my implementation. It’s pretty simple, as the jpeg package already returns the right kind of array when reading a JPEG.

imageRGB <- setClass("imageRGB", slots = list(original = "array", current = "array", operations = "list"))

imageRGBFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    return(new("imageRGB", original = image, current = image, operations = list()))
}

plot.imageRGB <- function(object){
    plot(as.raster(object@current))
}


## Converting To Black And White

Now I also want a method to convert an RGB image into a black and white one.

library(Raspository)
imageBWFromRGB <- function(img, chPortion = c(0.33, 0.33, 0.33)){
    if(sum(chPortion) > 1){
        stop("Channel portions mustn't add up to more than one.")
    }
    original <- img@original[,,1] * chPortion[1] + img@original[,,2] * chPortion[2] + img@original[,,3] * chPortion[3]
    current <- img@current[,,1] * chPortion[1] + img@current[,,2] * chPortion[2] + img@current[,,3] * chPortion[3]
    return(new("imageBW", original = original, current = current, operations = img@operations))
}


The parameter chPortion is a vector that indicates how much each channel from the original color space should contribute to the resulting black and white picture. As a matter of course its components aren’t allowed to add up to more than one.
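The weighted sum itself can be checked on a tiny synthetic array. As an aside, a popular choice for chPortion would be the BT.601 luma weights, which weight green highest because the eye is most sensitive to it:

```r
# two pixels: one pure red, one pure green, as a 2x1 RGB array
rgbArray <- array(c(1, 0,    # R channel
                    0, 1,    # G channel
                    0, 0),   # B channel
                  dim = c(2, 1, 3))

w <- c(0.299, 0.587, 0.114)  # BT.601 luma weights
bw <- rgbArray[,,1] * w[1] + rgbArray[,,2] * w[2] + rgbArray[,,3] * w[3]
bw   # pure red pixel -> 0.299, pure green pixel -> 0.587
```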

## HSV Color Space

Another color space I want to show you today is HSV. While you could imagine RGB as a cube, in the same analogy HSV would be a cylinder, with one polar coordinate H, which stands for hue. This coordinate determines the color of a given pixel.
The other two coordinates are S, the saturation, and V, the value (brightness) of a pixel. Those two coordinates are real numbers between zero and one as before.

Now you could ask why this representation could be useful, and the answers are many. But let me just name the editing of pictures as one of them. Let’s say you have a picture that is too dark, but the colors themselves are OK. Then you could use the HSV color space to change the value/brightness while leaving hue and saturation the same.
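That editing idea can already be sketched with base R’s own grDevices helpers, without any of the classes from this post. The color and the 1.8 brightening factor here are arbitrary examples:

```r
# a dark orange as an RGB triple, converted to HSV
darkOrange <- rgb2hsv(matrix(c(0.4, 0.2, 0.1), nrow = 3), maxColorValue = 1)

# scale only the V (brightness) channel; hue and saturation stay untouched
brighter <- hsv(h = darkOrange["h", 1],
                s = darkOrange["s", 1],
                v = min(1, darkOrange["v", 1] * 1.8))
brighter   # hex string of the brightened color
```

Note that rgb2hsv represents the hue as a fraction of the circle in [0, 1] rather than in degrees, which is why it can be fed straight back into hsv().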
But of course there are many other color spaces with other neat properties you might want to exploit.
However, instead of dreaming about the perfect color space, which feels like magic, let’s just implement this one, including the conversion from RGB.
We’ll also need some helper functions for it. And please, if you don’t understand something, just ask me in the comments. There should be no shame in not understanding something and I will be happy to give you an explanation to help you understand it.

imageHSV <- setClass("imageHSV", slots = list(original = "array", current = "array", operations = "list"))

calculateHUE <- function(max, maxIndex, min, r, g, b){
    h <- 0.0
    if(max == min){
        return(h)
    }else if(maxIndex == 1){
        h <- 60.0 * ((g - b)/(max - min))
    }else if(maxIndex == 2){
        h <- 60.0 * (2.0 + (b - r)/(max - min))
    }else if(maxIndex == 3){
        h <- 60.0 * (4.0 + (r - g)/(max - min))
    }

    # if the value is negative add 360° to it
    if(h >= 0){
        return(h)
    }else{
        return(h + 360)
    }
}

rgbArrayToHsv <- function(rgbArray){
    # get the maximal color and its index in each pixel
    max <- apply(rgbArray, c(1,2), max)
    maxIndex <- apply(rgbArray, c(1,2), which.max)
    # get the minimal color in each pixel
    min <- apply(rgbArray, c(1,2), min)

    # calculate the hue for each pixel
    h <- mapply(FUN = calculateHUE, max = max, maxIndex = maxIndex, min = min,
                r = rgbArray[,,1], g = rgbArray[,,2], b = rgbArray[,,3])
    # convert vector back to matrix
    h <- matrix(h, ncol = ncol(max))

    # calculate saturation
    s <- (max - min)/max
    # set values to zero, where max is 0 (division by zero -> NA)
    s[is.na(s)] <- 0
    # max is equal to v (value/brightness)
    v <- max

    # bind matrices together to array and return
    require(abind)
    hsvArray <- abind(h, s, v, along = 3)
    return(hsvArray)
}

imageHSVFromRGB <- function(img){
    return(new("imageHSV", original = rgbArrayToHsv(img@original),
               current = rgbArrayToHsv(img@current),
               operations = img@operations))
}


Phew, this took me more effort than expected. Luckily I feel comfortable using the various apply functions in R. They might seem alien at first, but there’s a lot of neat stuff you can do if you master them.
But one side note… The calculateHUE function is very time-consuming, because it is called a lot. So I think it would be worthwhile to re-implement it at some point in C++ to speed things up. Probably a good exercise for the future. I’ll keep you posted. 🙂
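Before reaching for C++, a cheaper option might be to vectorize the hue computation in plain R, replacing the per-pixel mapply call with logical indexing. The following is only a sketch under that assumption (the name vectorizedHue is made up); the tie-breaking mirrors which.max, which picks the first channel on ties:

```r
# Vectorized hue on plain r, g, b matrices, in degrees [0, 360)
vectorizedHue <- function(r, g, b){
    mx <- pmax(r, g, b)
    mn <- pmin(r, g, b)
    d <- mx - mn
    h <- matrix(0, nrow = nrow(r), ncol = ncol(r))

    # red is the (first) maximum channel
    idx <- d > 0 & mx == r
    h[idx] <- 60 * ((g[idx] - b[idx]) / d[idx])
    # green is the maximum channel (and red is not)
    idx <- d > 0 & mx == g & mx != r
    h[idx] <- 60 * (2 + (b[idx] - r[idx]) / d[idx])
    # blue is the maximum channel (and neither red nor green is)
    idx <- d > 0 & mx == b & mx != r & mx != g
    h[idx] <- 60 * (4 + (r[idx] - g[idx]) / d[idx])

    # if the value is negative add 360° to it
    h[h < 0] <- h[h < 0] + 360
    h
}
```

Because pmax, pmin and the arithmetic all operate on whole matrices at once, this avoids one R function call per pixel.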

Anyway, that’s enough for me today… David tired now. 😮 The conversion from HSV to RGB will probably come in the next few days.
So, see ya!

## Availability Of The Code

And of course you can access a maintained version of the code for the color spaces in my GitHub repository Raspository under R/imageRGB.R and R/imageHSV.R.


# And Measuring Noise

Using the nomenclature developed in yesterday’s post, I will today also implement a method for creating salt-and-pepper noise in images. This noise simulates dead pixels by setting them either to the lowest or highest grey value, in our case 0 or 1.

First let’s install and load the Raspository, where the methods and class developed yesterday are located.

if (!requireNamespace("devtools", quietly = TRUE))
    install.packages("devtools")
devtools::install_github("David-J-R/Raspository")

library(Raspository)


Without further ado let me show you how to implement the salt and pepper noise.

saltAndPepperNoise <- function(object, percentage = .2){
    # select the indices to set to 0 or 1 at random
    indices <- sample(length(object@current), length(object@current) * percentage)
    # draw zeros and ones from a binomial distribution
    values <- rbinom(length(indices), 1, 0.5)

    object@current[indices] <- values
    return(object)
}


OK, our animal test subject today will be an owl.

set.seed(1)
owl <- imageBWFromJpeg("owl.jpg")
plot(owl)


It looks surprised, doesn’t it? It probably knows about the salt and pepper noise we will add right away.

owlNoise <- saltAndPepperNoise(owl)
plot(owlNoise)


Uhm, this looks really annoying now. The other kinds of noise weren’t that disruptive. Just like dead pixels on your screen, they make you feel bad.

## Introducing Two Measures of Noise

But what would be nice now as well would be some measure of the noise. Let’s just use the mean squared error for it. I’ll implement a function that can compare either two pictures or the current and original picture in an imageBW object, depending on the input parameters.

MSE <- function(object, other = NULL){
    if(is.null(other)){
        errorMatrix <- object@original - object@current
    }else{
        errorMatrix <- other@current - object@current
    }
    squaredErrorMatrix <- errorMatrix ^ 2
    return(mean(squaredErrorMatrix))
}


Nice! Now we finally have a measure for our noise. Of course this will be especially helpful later, when we want to evaluate the performance of reconstruction methods.
So let’s test it.

MSE(owlNoise)

## [1] 0.05545031

MSE(owlNoise, owl)

## [1] 0.05545031


As expected, both function calls leave us with the same result.

Let’s also implement the peak signal-to-noise ratio, which gives us back a decibel value. The lower the value, the more noise you have in the picture. You can calculate it according to the following formula, which I will also “simplify” a bit.

$$PSNR(x, y) = 10 \cdot \log_{10} \left( \frac{MAX^{2}}{MSE(x, y)} \right)$$

I hope you know the logarithm rules. 😛

$$PSNR(x, y) = 10 \cdot \log_{10}(MAX^{2}) - 10 \cdot \log_{10}(MSE(x, y))$$ $$PSNR(x, y) = 20 \cdot \log_{10}(MAX) - 10 \cdot \log_{10}(MSE(x, y))$$

Where x and y are the two pictures to compare and MAX is the maximum value a pixel can take on.

And for MAX being one in our case we get:

$$PSNR(x, y) = -10 \cdot \log_{10}(MSE(x, y))$$

Subsequently, it’s now time to implement this measure as well:

PSNR <- function(object, other = NULL){
    mse <- MSE(object, other)
    return(-10 * log(mse, base = 10))
}


That was pretty straightforward. So let’s test it!

PSNR(owlNoise)

## [1] 12.56096


It’s a relatively low value, which indicates a lot of noise. In my opinion this value is more intuitive, probably because of the use of the logarithm. Indeed the MSE value alone didn’t seem that big.

But that’s it for now. There will be probably further stuff next week. But probably not that much.
Thanks for reading and have a nice remaining weekend. 🙂

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageNoise.R.
But as I will expand the code, it will also probably grow there.


# Let’s Add Some More Noise

As mentioned in yesterday’s post, there are more kinds of noise. So let’s first load today’s picture and then add some multiplicative noise!

library(jpeg)
pigeon <- readJPEG("pigeon.jpg")
pigeonBW <- pigeon[,,1]
plot(as.raster(pigeonBW))


Oh look, it’s a pigeon on a lantern. I wonder what this little guy was thinking? But let’s not dwell on this. Rather, let’s think about how we can scramble up this picture!
I will first try multiplicative normally distributed noise, which is pretty straightforward, if you followed yesterday’s post.

set.seed(1)
pigeonNormalNoise <- pigeonBW * (1 + rnorm(length(pigeonBW), sd = sd(pigeonBW)))
pigeonNormalNoise[pigeonNormalNoise > 1] <- 1
pigeonNormalNoise[pigeonNormalNoise < 0] <- 0
plot(as.raster(pigeonNormalNoise))


Can you see the difference to the additive noise from yesterday? Indeed the darker areas of the picture seem nearly untouched by the noise. That’s because the intensity of the noise is now dependent on the intensity of the pixel. And intensity is what dark pixels are lacking… so to speak.
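This intensity dependence is easy to see in isolation: under p · (1 + e) the spread of a pixel scales with the pixel value p itself. A tiny standalone demonstration on two single pixel values (the 0.2 noise level is arbitrary):

```r
set.seed(42)
e <- rnorm(10000, sd = 0.2)        # the multiplicative noise term

darkPixel   <- 0.05 * (1 + e)      # a dark pixel barely moves
brightPixel <- 0.9  * (1 + e)      # a bright pixel scatters widely

c(sd(darkPixel), sd(brightPixel))  # roughly 0.01 versus 0.18
```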

But it’s really annoying and redundant to have to write the same stuff over and over again, isn’t it? So let’s do what a Computer Scientist is best at: let’s generalize some methods.
But where to start?

In my view it would be useful to have some kind of a (black and white) image class, which saves the image and the operations on it. Hence the process would also be reproducible, which is always nice.

imageBW <- setClass("imageBW", slots = list(original = "matrix", current = "matrix", operations = "list"))

imageBWFromJpeg <- function(pathToJpeg){
    require(jpeg)
    image <- readJPEG(pathToJpeg)
    imageBW <- image[,,1]
    return(new("imageBW", original = imageBW, current = imageBW, operations = list()))
}

plot.imageBW <- function(object){
    plot(as.raster(object@current))
}


For convenience I also overrode the default plot function. This makes our lives even easier. In the next step I’ll implement the different noise functions. There are probably more generalized ways of doing so, but for now this will suffice. I also haven’t come up with a good way of storing the performed operations yet. So I’ll probably do that another day as well.

cropPixels <- function(object){
    object@current[object@current > 1] <- 1
    object@current[object@current < 0] <- 0
    return(object)
}

addUnifNoise <- function(object){
    slot(object, "current") <- slot(object, "current") +
        runif(length(slot(object, "current")), min = -1, max = 1)
    return(cropPixels(object))
}

addNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current + rnorm(length(object@current), sd = sd(object@current))
    }else{
        object@current <- object@current + rnorm(length(object@current), sd = sd)
    }
    return(cropPixels(object))
}

multiplyUnifNoise <- function(object){
    object@current <- object@current * (1 + runif(length(object@current), min = -1, max = 1))
    return(cropPixels(object))
}

multiplyNormalNoise <- function(object, sd = NULL){
    if(is.null(sd)){
        object@current <- object@current * (1 + rnorm(length(object@current),
                                                      sd = sd(object@current)))
    }else{
        object@current <- object@current * (1 + rnorm(length(object@current),
                                                      sd = sd))
    }
    return(cropPixels(object))
}


For the future I would also like to have this working with infix operators, meaning that I could do stuff like image <- image + normalNoise(...) or image <- image * (1 + normalNoise(...)), so that I have a neat little grammar for working with images. However, for the moment those functions will do the job. Now let us make use of the newly implemented methods and add a lot of noise to the picture.

image <- imageBWFromJpeg("pigeon.jpg")
image <- multiplyUnifNoise(image)
plot(image)


Haha, there’s not much left of this poor pigeon. But you can still see its and the lamp’s outlines. And I’m quite certain that there are algorithms out there that could reconstruct a lot of the original image. You will see that later on. 🙂
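As a teaser for the infix-operator grammar mentioned above, here is one way it could look. This is a standalone sketch on a plain matrix with a made-up operator name, %plusNoise%; with the S4 class you would update the current slot and call cropPixels instead:

```r
`%plusNoise%` <- function(pixels, sd){
    noisy <- pixels + rnorm(length(pixels), sd = sd)
    # crop to the valid grey value range, like cropPixels does
    pmin(pmax(noisy, 0), 1)
}

set.seed(1)
m <- matrix(runif(9), nrow = 3)
noisy <- m %plusNoise% 0.1
range(noisy)   # guaranteed to stay inside [0, 1]
```

User-defined infix operators in R are just functions whose names are wrapped in %...%, so the whole grammar boils down to picking good names.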

## Availability Of The Code

You can access a maintained version of the code in my GitHub repository Raspository under R/imageOneChannel.R.
But as I will expand the code, it will also probably grow there.


# Or Let’s Make Some Noise!!

As I wrote in one of my last posts, there will be some content about image processing and computer vision. And following my second lecture on it, here we go! Fortunately R has some capabilities for handling pictures, of which I will make use.

For now the topic will just be how to add noise to images, which can be useful, e.g. if you want to test the performance of an algorithm for handling noise. So we’re simulating noise. Natural noise in images can come from different sources… The most common would be due to high ISO values and/or long exposure times, so in general in low-light conditions.
However, I’ll only work with black and white pictures for today. Maybe color comes at some point in the future. 🙂

First I’ll show you how to load pictures into R and how to plot them again. It’s quite easy, tbh:

library(jpeg)
goose <- readJPEG("goose.jpg")
class(goose)

## [1] "array"

dim(goose)

## [1] 600 897   3


As the picture is black and white, we only need one channel of it. And we can just plot it by transforming it into a raster object:

gooseBW <- goose[,,1]
plot(as.raster(gooseBW))


Cute, this goose, isn’t it? 🙂 Now let’s destroy this picture with noise! Nyehehehehe!

To begin with I will add some uniformly distributed noise. As the values in the matrix are between 0 and 1, I will add values between -1 and 1.

set.seed(1)
gooseUniformNoise <- gooseBW + runif(length(gooseBW), min = -1, max = 1)


But we can’t plot this right away, as the matrix now also contains values above 1 and below 0. I’ll fix this by cropping at the boundaries and then plot.

gooseUniformNoise[gooseUniformNoise > 1] <- 1
gooseUniformNoise[gooseUniformNoise < 0] <- 0
plot(as.raster(gooseUniformNoise))


Interesting how you can still make out the goose in spite of the noise, isn’t it? Now let’s do the same with normally distributed noise. I’ll take the standard deviation of the black and white picture.

gooseNormalNoise <- gooseBW + rnorm(length(gooseBW), sd = sd(gooseBW))
gooseNormalNoise[gooseNormalNoise > 1] <- 1
gooseNormalNoise[gooseNormalNoise < 0] <- 0
plot(as.raster(gooseNormalNoise))


In the lecture we also covered multiplicative and impulse noise. However, I won’t do them in this post, maybe another time.
Anyway, do you already see why I like R so much? 🙂 If you do Linear Algebra and Statistics stuff it’s just super comfy!
I’ll probably show you even more reasons why, when I demonstrate how to denoise pictures again. But that’s it for today.

Have a nice weekend!