Generating Abstract images using Neural Networks

Bluetick Consultants Inc.
4 min read · Feb 14, 2023


Artificial neural networks are composed of layers of nodes: an input layer, one or more hidden layers, and an output layer. Each node connects to nodes in the next layer and has an associated weight and threshold value.

To generate random colored images we will develop an ANN architecture that takes each pixel as an input. Every input is multiplied by a weight and the results are summed. To handle the n-dimensional input data we can use the techniques listed below.

Abstract images generated using Neural Networks

Techniques used

● NumPy
● Statistics
● Activation functions
● OpenCV

We select the size of the image as an input in the form of height and width, e.g. image width = 216 and image height = 216, which gives us a black image.

select image width and height
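
A minimal sketch of this step with NumPy and OpenCV (the post shows its code as an image, so the names and values here are illustrative):

import numpy as np
import cv2

# Hypothetical example values matching the text above
image_width, image_height = 216, 216

# A 2-d array of zeros is a completely black greyscale canvas
canvas = np.zeros((image_height, image_width), dtype=np.uint8)
cv2.imwrite("black_canvas.png", canvas)   # save the empty canvas for inspection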

A greyscale image is represented as 2-d array data, while a colored image is represented as 3-d array data.

To generate the random images we need to convert the 2-d greyscale data into the 3-d format. We can collect every pixel along the x and y axes and copy it across a third dimension, converting the 2-d data into 3-d data; we also need to specify which color mode to use and whether or not to include an alpha channel.
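
One way to do this conversion (a sketch, not necessarily the post's exact code) is to repeat the single greyscale channel along a new axis with NumPy, or to let OpenCV add the color and alpha channels:

import numpy as np
import cv2

gray = np.zeros((216, 216), dtype=np.uint8)       # 2-d greyscale data, shape (H, W)

# Copy the single channel along a new third axis to get (H, W, 3) color data
rgb = np.repeat(gray[:, :, np.newaxis], 3, axis=2)

# Or let OpenCV do the conversion and add an alpha channel as well
rgba = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGBA)    # shape (H, W, 4)

print(gray.shape, rgb.shape, rgba.shape)          # (216, 216) (216, 216, 3) (216, 216, 4)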

Neural Network Architecture

For creating a neural network we need to define the input data, the hidden layers, the number of neurons per layer, and the activation function.
Suppose we have a (512, 512) image. For every pixel the network maps the coordinate pair (i, j) to r, g, b, a values.
In total we have 5-dimensional input data: each pixel coordinate is expanded into 5 values (two normalized coordinates, their distance from the center, an alpha value, and a bias value), from which the r, g, b, a outputs are generated.

value = min(image_height, image_width)
input1 = i / value - 0.5
input2 = j / value - 0.5
Z = sqrt(input1^2 + input2^2)
Z1 = random value from -1 to 1 (alpha)
Z2 = random value from -1 to 1 (bias)
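
A NumPy sketch of this step (the variable names are illustrative, not taken from the post's code) builds the five input features for every pixel at once:

import numpy as np

image_height, image_width = 512, 512
value = min(image_height, image_width)

# Normalized pixel coordinates centered on the image
i, j = np.mgrid[0:image_height, 0:image_width]
input1 = i / value - 0.5
input2 = j / value - 0.5

# Radial distance from the center plus two random latent values
z = np.sqrt(input1 ** 2 + input2 ** 2)
z1 = np.random.uniform(-1, 1)   # alpha value, shared by every pixel
z2 = np.random.uniform(-1, 1)   # bias value, shared by every pixel

# Stack into a (num_pixels, 5) matrix: one 5-dimensional input row per pixel
inputs = np.stack(
    [input1.ravel(), input2.ravel(), z.ravel(),
     np.full(input1.size, z1), np.full(input2.size, z2)],
    axis=1,
)
print(inputs.shape)   # (262144, 5)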

These inputs are multiplied by random weights and summed, and an activation function is applied at each neuron. The selected color mode determines the values produced by the output layer.

Neural Network Architecture
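
A minimal NumPy forward pass under these assumptions, continuing from the input matrix built above; the layer sizes, weight initialization, and the use of tanh in the hidden layers are illustrative choices, not the post's exact configuration:

import numpy as np

def generate_pixels(inputs, n_hidden_layers=3, n_neurons=16, n_outputs=3, seed=None):
    # Push the (num_pixels, 5) input matrix through randomly weighted dense layers.
    # n_outputs is 3 for RGB, or 4 when an alpha channel is wanted.
    rng = np.random.default_rng(seed)
    x = inputs
    for _ in range(n_hidden_layers):
        w = rng.standard_normal((x.shape[1], n_neurons))
        x = np.tanh(x @ w)                      # hidden layers use tanh
    w_out = rng.standard_normal((x.shape[1], n_outputs))
    return 1.0 / (1.0 + np.exp(-(x @ w_out)))   # sigmoid keeps outputs in (0, 1)

# Reshape the flat output back into an image and scale to 8-bit pixel values
pixels = generate_pixels(inputs, seed=42)
image = (pixels.reshape(image_height, image_width, -1) * 255).astype(np.uint8)

Because the weights are never trained, every random seed produces a different abstract pattern.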

Activation Functions used

● Sigmoid
● ReLU
● Softmax
● Tanh
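
NumPy definitions of these four functions (a sketch; which one is applied at each layer is a design choice, as discussed below):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # range (0, 1)

def relu(x):
    return np.maximum(0.0, x)                # 0 for negative inputs, identity otherwise

def softmax(x):
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))   # shift for numerical stability
    return e / np.sum(e, axis=-1, keepdims=True)        # each row sums to 1

def tanh(x):
    return np.tanh(x)                        # range (-1, 1)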

Sigmoid Activation Function

The sigmoid activation function's range is 0 to 1 and its derivative's range is 0 to 0.25, so using sigmoid in the hidden layers can lead to a vanishing gradient problem.

Sigmoid Activation Function

Tanh Activation Function

The tanh activation function's range is -1 to 1 and its derivative's range is 0 to 1, which is larger than sigmoid's, so it helps overcome the vanishing gradient problem when we update the weights using backpropagation.

Tanh Activation Function

Images generated with RGB color mode

Images generated with HSV and HSL color mode

Images generated with CMYK color mode
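
A sketch of how the network output could be interpreted under these color modes with OpenCV; OpenCV has no built-in CMYK conversion, so that case uses the standard manual formula, and the function and file names here are illustrative (the image array continues from the sketch above):

import cv2
import numpy as np

def to_rgb(raw, mode="RGB"):
    # Interpret the network's [0, 1] outputs under a given color mode and
    # return an 8-bit RGB image. `raw` has shape (H, W, 3), or (H, W, 4) for CMYK.
    if mode == "RGB":
        return (raw[..., :3] * 255).astype(np.uint8)
    if mode == "HSV":
        hsv = (raw[..., :3] * [179, 255, 255]).astype(np.uint8)   # OpenCV hue runs 0-179
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
    if mode == "HSL":
        hls = (raw[..., :3] * [179, 255, 255]).astype(np.uint8)   # OpenCV calls this order HLS
        return cv2.cvtColor(hls, cv2.COLOR_HLS2RGB)
    if mode == "CMYK":
        c, m, y, k = (raw[..., n] for n in range(4))              # needs 4 output channels
        rgb = np.stack([(1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)], axis=-1)
        return (rgb * 255).astype(np.uint8)
    raise ValueError(f"unknown color mode: {mode}")

rgb = to_rgb(image.astype(np.float32) / 255, mode="HSV")
cv2.imwrite("abstract_hsv.png", cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))   # OpenCV saves BGR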

Written by Bluetick Consultants Inc.

Bluetick Consultants Inc: Driving Digital Transformation with Innovations like Generative AI, Cloud Migration, Talent Augmentation & More.
