Image convolution

You are encouraged to solve this task according to the task description, using any language you may know.

One class of digital image filters is described by a rectangular matrix of real coefficients, called a kernel, convolved over a sliding window of image pixels. Usually the kernel is square: <math>K_{kl}</math>, where k, l range over -R, -R+1, ..., R-1, R, and W = 2R+1 is the kernel width. The filter determines the new value of a monochromatic image pixel P<sub>ij</sub> as a convolution of the image pixels in the window centered at i, j with the kernel values:

<math>P_{ij} = \sum_{k=-R}^{R} \sum_{l=-R}^{R} P_{i+k,\,j+l}\, K_{kl}</math>

Color images are usually split into their channels, which are filtered independently. The color model can be changed as well, i.e. filtering need not be performed in RGB. Common kernel sizes are 3×3 and 5×5. The cost of filtering grows quadratically (O(W²)) with the kernel width W: a 3×3 kernel needs 9 multiply-accumulates per pixel and channel, a 5×5 kernel 25.

Task: Write a generic 3×3 convolution kernel filter. Optionally, show some end-user filters that use this generic one.

(You can use these input and output images to test the functions below.)
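
As a minimal illustration of the formula above (independent of the solutions below; the buffer layout and helper names are ad hoc), a 3x3 convolution over a grayscale byte buffer with edge clamping and saturation might look like this:

<lang c>#include <stdlib.h>

/* Clamp a coordinate into [0, n-1] so border pixels reuse the nearest edge value. */
static int clampi(int v, int n) { return v < 0 ? 0 : (v >= n ? n - 1 : v); }

/* Convolve a w x h grayscale image (row-major bytes) with a 3x3 kernel k,
   dividing by divisor and saturating the result to 0..255.
   Returns a newly allocated buffer, or NULL on allocation failure. */
unsigned char *convolve3x3(const unsigned char *src, int w, int h,
                           const double k[9], double divisor)
{
  unsigned char *dst = malloc((size_t)w * h);
  if (dst == NULL) return NULL;
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      double sum = 0.0;
      for (int ky = -1; ky <= 1; ky++)
        for (int kx = -1; kx <= 1; kx++)
          sum += k[(ky + 1) * 3 + (kx + 1)] *
                 src[clampi(y + ky, h) * w + clampi(x + kx, w)];
      sum /= divisor;
      dst[y * w + x] = sum < 0.0 ? 0 : (sum > 255.0 ? 255 : (unsigned char)(sum + 0.5));
    }
  }
  return dst;
}</lang>

For example, the kernel {-1,-1,-1, -1,9,-1, -1,-1,-1} with divisor 1 sharpens, while {1,1,1, 1,1,1, 1,1,1} with divisor 9 is a box blur.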

Ada

First we define floating-point luminance and color pixel types, which will then be used for filtering: <lang ada>type Float_Luminance is new Float;

type Float_Pixel is record

  R, G, B : Float_Luminance := 0.0;

end record;

function "*" (Left : Float_Pixel; Right : Float_Luminance) return Float_Pixel is

  pragma Inline ("*");

begin

  return (Left.R * Right, Left.G * Right, Left.B * Right);

end "*";

function "+" (Left, Right : Float_Pixel) return Float_Pixel is

  pragma Inline ("+");

begin

  return (Left.R + Right.R, Left.G + Right.G, Left.B + Right.B);

end "+";

function To_Luminance (X : Float_Luminance) return Luminance is

  pragma Inline (To_Luminance);

begin

  if X <= 0.0 then
     return 0;
  elsif X >= 255.0 then
     return 255;
  else
     return Luminance (X);
  end if;

end To_Luminance;

function To_Pixel (X : Float_Pixel) return Pixel is

  pragma Inline (To_Pixel);

begin

  return (To_Luminance (X.R), To_Luminance (X.G), To_Luminance (X.B));

end To_Pixel;</lang> Float_Luminance is an unconstrained equivalent of Luminance; Float_Pixel is the corresponding equivalent of Pixel. The conversion operations To_Luminance and To_Pixel saturate the corresponding values. The operation + is defined per channel. The operation * is defined as multiplication by a scalar. (I.e. Float_Pixel is a vector space.)
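
For comparison, the same idea (per-channel addition, scaling by a kernel weight, and a saturating conversion back to 8-bit channels) can be sketched in C; the type and function names below are invented for illustration and belong to no solution on this page:

<lang c>/* Illustrative C analogue of Float_Pixel and its operations (ad-hoc names). */
typedef struct { double r, g, b; } fpixel;

/* "+" defined per channel */
static fpixel fp_add(fpixel a, fpixel b) {
  return (fpixel){ a.r + b.r, a.g + b.g, a.b + b.b };
}

/* "*" defined as multiplication by a scalar (a kernel weight) */
static fpixel fp_scale(fpixel p, double s) {
  return (fpixel){ p.r * s, p.g * s, p.b * s };
}

/* Saturating conversion back to an 8-bit channel, like To_Luminance */
static unsigned char saturate(double x) {
  return x <= 0.0 ? 0 : (x >= 255.0 ? 255 : (unsigned char)x);
}</lang>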

Now we are ready to implement the filter. The operation is performed in place in memory. Access to the image array is minimized using a sliding window. The filter is in fact a triplet of filters handling each image channel independently. It can be used with other color models as well. <lang ada>type Kernel_3x3 is array (-1..1, -1..1) of Float_Luminance;

procedure Filter (Picture : in out Image; K : Kernel_3x3) is

  function Get (I, J : Integer) return Float_Pixel is
     pragma Inline (Get);
  begin
     if I in Picture'Range (1) and then J in Picture'Range (2) then
        declare
           Color : Pixel := Picture (I, J);
        begin
           return (Float_Luminance (Color.R), Float_Luminance (Color.G), Float_Luminance (Color.B));
        end;
     else
        return (others => 0.0);
     end if;
  end Get;
  W11, W12, W13 : Float_Pixel; -- The image window
  W21, W22, W23 : Float_Pixel;
  W31, W32, W33 : Float_Pixel;
  Above : array (Picture'First (2) - 1..Picture'Last (2) + 1) of Float_Pixel;
  This  : Float_Pixel;

begin

  for I in Picture'Range (1) loop
     W11 := Above (Picture'First (2) - 1); -- The upper row is taken from the cache
     W12 := Above (Picture'First (2)    );
     W13 := Above (Picture'First (2) + 1);
     W21 := (others => 0.0);               -- The middle row
     W22 := Get (I, Picture'First (2)    );
     W23 := Get (I, Picture'First (2) + 1);
     W31 := (others => 0.0);               -- The bottom row
     W32 := Get (I+1, Picture'First (2)    );
     W33 := Get (I+1, Picture'First (2) + 1);
     for J in Picture'Range (2) loop
        This :=
           W11 * K (-1, -1) + W12 * K (-1, 0) + W13 * K (-1, 1) +
           W21 * K ( 0, -1) + W22 * K ( 0, 0) + W23 * K ( 0, 1) +
           W31 * K ( 1, -1) + W32 * K ( 1, 0) + W33 * K ( 1, 1);
        Above (J-1) := W21;
        W11 := W12; W12 := W13; W13 := Above (J+1);     -- Shift the window
        W21 := W22; W22 := W23; W23 := Get (I,   J+1);
         W31 := W32; W32 := W33; W33 := Get (I+1, J+1);
        Picture (I, J) := To_Pixel (This);
     end loop;
     Above (Picture'Last (2)) := W21;
  end loop;

end Filter;</lang> Example of use: <lang ada>   F1, F2 : File_Type;
begin

  Open (F1, In_File, "city.ppm");
  declare
     X : Image := Get_PPM (F1);
  begin
     Close (F1);
     Create (F2, Out_File, "city_sharpen.ppm");
     Filter (X, ((-1.0, -1.0, -1.0), (-1.0, 9.0, -1.0), (-1.0, -1.0, -1.0)));
     Put_PPM (F2, X);
  end;
  Close (F2);</lang>

BBC BASIC

<lang bbcbasic> Width% = 200

     Height% = 200
     
     DIM out&(Width%-1, Height%-1, 2)
     
     VDU 23,22,Width%;Height%;8,16,16,128
     *DISPLAY Lena
     OFF
     
     DIM filter%(2, 2)
     filter%() = -1, -1, -1, -1, 12, -1, -1, -1, -1
     
     REM Do the convolution:
     FOR Y% = 1 TO Height%-2
       FOR X% = 1 TO Width%-2
         R% = 0 : G% = 0 : B% = 0
         FOR I% = -1 TO 1
           FOR J% = -1 TO 1
             C% = TINT((X%+I%)*2, (Y%+J%)*2)
             F% = filter%(I%+1,J%+1)
             R% += F% * (C% AND &FF)
             G% += F% * (C% >> 8 AND &FF)
             B% += F% * (C% >> 16)
           NEXT
         NEXT
         IF R% < 0 R% = 0 ELSE IF R% > 1020 R% = 1020
         IF G% < 0 G% = 0 ELSE IF G% > 1020 G% = 1020
         IF B% < 0 B% = 0 ELSE IF B% > 1020 B% = 1020
         out&(X%, Y%, 0) = R% / 4 + 0.5
         out&(X%, Y%, 1) = G% / 4 + 0.5
         out&(X%, Y%, 2) = B% / 4 + 0.5
       NEXT
     NEXT Y%
     
     REM Display:
     GCOL 1
     FOR Y% = 0 TO Height%-1
       FOR X% = 0 TO Width%-1
         COLOUR 1, out&(X%,Y%,0), out&(X%,Y%,1), out&(X%,Y%,2)
         LINE X%*2,Y%*2,X%*2,Y%*2
       NEXT
     NEXT Y%
     
     REPEAT
       WAIT 1
     UNTIL FALSE</lang>

C

Interface:

<lang c>image filter(image img, double *K, int Ks, double divisor, double offset);</lang>

The implementation follows. The Ks argument is the kernel radius: 1 specifies a 3×3 matrix, 2 a 5×5 matrix, ..., N a (2N+1)×(2N+1) matrix.
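
As a quick illustration of that indexing (an ad-hoc snippet, not part of the solution), the following prints the row-major positions a 3×3 kernel occupies in K when Ks = 1:

<lang c>#include <stdio.h>

/* Ks is the kernel radius, so the width is 2*Ks+1 and the coefficient at
   offset (kx, ky) from the centre lives at K[(kx+Ks) + (ky+Ks)*(2*Ks+1)]. */
int main(void) {
  int Ks = 1, w = 2*Ks + 1;            /* Ks = 1 gives 3x3, Ks = 2 gives 5x5 */
  for (int ky = -Ks; ky <= Ks; ky++) {
    for (int kx = -Ks; kx <= Ks; kx++)
      printf("K[%d] ", (kx + Ks) + (ky + Ks)*w);
    printf("\n");                      /* prints K[0] .. K[8], row by row */
  }
  return 0;
}</lang>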

<lang c>#include "imglib.h"

inline static color_component GET_PIXEL_CHECK(image img, int x, int y, int l) {

 if ( (x<0) || (x >= img->width) || (y<0) || (y >= img->height) ) return 0;
 return GET_PIXEL(img, x, y)[l];

}

image filter(image im, double *K, int Ks, double divisor, double offset) {

 image oi;
 unsigned int ix, iy, l;
 int kx, ky;
 double cp[3];
 oi = alloc_img(im->width, im->height);
 if ( oi != NULL ) {
   for(ix=0; ix < im->width; ix++) {
     for(iy=0; iy < im->height; iy++) {

        cp[0] = cp[1] = cp[2] = 0.0;
        for(kx=-Ks; kx <= Ks; kx++) {
          for(ky=-Ks; ky <= Ks; ky++) {
            for(l=0; l<3; l++)
              cp[l] += (K[(kx+Ks) + (ky+Ks)*(2*Ks+1)]/divisor) *
                       ((double)GET_PIXEL_CHECK(im, ix+kx, iy+ky, l)) + offset;
          }
        }
        for(l=0; l<3; l++)
          cp[l] = (cp[l]>255.0) ? 255.0 : ((cp[l]<0.0) ? 0.0 : cp[l]);
        put_pixel_unsafe(oi, ix, iy,
                         (color_component)cp[0],
                         (color_component)cp[1],
                         (color_component)cp[2]);

     }
   }
   return oi;
 }
 return NULL;

}</lang>

Usage example:

The read_image function is from here.

<lang c>#include <stdio.h>
#include "imglib.h"

const char *input = "Lenna100.jpg";
const char *output = "filtered_lenna%d.ppm";

double emboss_kernel[3*3] = {

 -2., -1.,  0.,
 -1.,  1.,  1.,
 0.,  1.,  2.,

};

double sharpen_kernel[3*3] = {

 -1.0, -1.0, -1.0,
 -1.0,  9.0, -1.0,
 -1.0, -1.0, -1.0

}; double sobel_emboss_kernel[3*3] = {

 -1., -2., -1.,
 0.,  0.,  0.,
 1.,  2.,  1.,

}; double box_blur_kernel[3*3] = {

 1.0, 1.0, 1.0,
 1.0, 1.0, 1.0,
 1.0, 1.0, 1.0,

};

double *filters[4] = {

 emboss_kernel, sharpen_kernel, sobel_emboss_kernel, box_blur_kernel

}; const double filter_params[2*4] = {

 1.0, 0.0,
 1.0, 0.0,
 1.0, 0.5,
 9.0, 0.0

};

int main() {

 image ii, oi;
 int i;
 char lennanames[30];
 ii = read_image(input);
 if ( ii != NULL ) {
   for(i=0; i<4; i++) {
     sprintf(lennanames, output, i);
     oi = filter(ii, filters[i], 1, filter_params[2*i], filter_params[2*i+1]);
     if ( oi != NULL ) {

        FILE *outfh = fopen(lennanames, "w");
        if ( outfh != NULL ) {
          output_ppm(outfh, oi);
          fclose(outfh);
        } else { fprintf(stderr, "out err %s\n", output); }
        free_img(oi);

     } else { fprintf(stderr, "err creating img filters %d\n", i); }
   }
   free_img(ii);
 } else { fprintf(stderr, "err reading %s\n", input); }

}</lang>

D

This requires the module from the Grayscale Image Task. <lang d>import std.string, std.math, std.algorithm, grayscale_image;

struct ConvolutionFilter {

   double[][] kernel;
   double divisor, offset_;
   string name;

}


Image!Color convolve(Color)(in Image!Color im,

                           in ConvolutionFilter filter)

pure nothrow in {

   assert(im !is null);
   assert(!filter.divisor.isNaN && !filter.offset_.isNaN);
   assert(filter.divisor != 0);
   assert(filter.kernel.length > 0 && filter.kernel[0].length > 0);
   foreach (const row; filter.kernel) // Is rectangular.
       assert(row.length == filter.kernel[0].length);
   assert(filter.kernel.length % 2 == 1); // Odd sized kernel.
   assert(filter.kernel[0].length % 2 == 1);
   assert(im.ny >= filter.kernel.length);
   assert(im.nx >= filter.kernel[0].length);

} out(result) {

   assert(result !is null);
   assert(result.nx == im.nx && result.ny == im.ny);

} body {

   immutable knx2 = filter.kernel[0].length / 2;
   immutable kny2 = filter.kernel.length / 2;
   auto io = new Image!Color(im.nx, im.ny);
   static if (is(Color == RGB))
       alias CT = typeof(Color.r); // Component type.
   else static if (is(typeof(Color.c)))
       alias CT = typeof(Color.c);
   else
       alias CT = Color;
   foreach (immutable y; kny2 .. im.ny - kny2) {
       foreach (immutable x; knx2 .. im.nx - knx2) {
           static if (is(Color == RGB))
               double[3] total = 0.0;
           else
               double total = 0.0;
           foreach (immutable sy, const kRow; filter.kernel) {
               foreach (immutable sx, immutable k; kRow) {
                   immutable p = im[x + sx - knx2, y + sy - kny2];
                   static if (is(Color == RGB)) {
                       total[0] += p.r * k;
                       total[1] += p.g * k;
                       total[2] += p.b * k;
                   } else {
                       total += p * k;
                   }
               }
           }
           immutable D = filter.divisor;
           immutable O = filter.offset_ * CT.max;
           static if (is(Color == RGB)) {
               io[x, y] = Color(
                   cast(CT)min(max(total[0]/ D + O, CT.min), CT.max),
                   cast(CT)min(max(total[1]/ D + O, CT.min), CT.max),
                   cast(CT)min(max(total[2]/ D + O, CT.min), CT.max));
           } else static if (is(typeof(Color.c))) {
               io[x, y] = Color(
                   cast(CT)min(max(total / D + O, CT.min), CT.max));
           } else {
               // If Color doesn't have a 'c' field, then Color is
               // assumed to be a built-in type.
               io[x, y] =
                   cast(CT)min(max(total / D + O, CT.min), CT.max);
           }
       }
   }
   return io;

}


void main() {

   immutable ConvolutionFilter[] filters = [
       {[[-2.0, -1.0, 0.0],
         [-1.0,  1.0, 1.0],
         [ 0.0,  1.0, 2.0]], divisor:1.0, offset_:0.0, name:"Emboss"},
       {[[-1.0, -1.0, -1.0],
         [-1.0,  9.0, -1.0],
         [-1.0, -1.0, -1.0]], divisor:1.0, 0.0, "Sharpen"},
       {[[-1.0, -2.0, -1.0],
         [ 0.0,  0.0,  0.0],
         [ 1.0,  2.0,  1.0]], divisor:1.0, 0.5, "Sobel_emboss"},
       {[[1.0, 1.0, 1.0],
         [1.0, 1.0, 1.0],
         [1.0, 1.0, 1.0]], divisor:9.0, 0.0, "Box_blur"},
       {[[1,  4,  7,  4, 1],
         [4, 16, 26, 16, 4],
         [7, 26, 41, 26, 7],
         [4, 16, 26, 16, 4],
         [1,  4,  7,  4, 1]], divisor:273, 0.0, "Gaussian_blur"}];
   Image!RGB im;
   im.loadPPM6("Lenna100.ppm");
   foreach (immutable filter; filters)
       im.convolve(filter)
       .savePPM6(format("lenna_%s.ppm", filter.name));
   const img = im.rgb2grayImage();
   foreach (immutable filter; filters)
       img.convolve(filter)
       .savePGM(format("lenna_gray_%s.ppm", filter.name));

}</lang>

Go

Using the standard image library: <lang go>package main

import (

   "fmt"
   "image"
   "image/color"
   "image/jpeg"
   "math"
   "os"

)

// kf3 is a generic convolution 3x3 kernel filter that operates on
// images of type image.Gray from the Go standard image library.
func kf3(k *[9]float64, src, dst *image.Gray) {

   for y := src.Rect.Min.Y; y < src.Rect.Max.Y; y++ {
       for x := src.Rect.Min.X; x < src.Rect.Max.X; x++ {
           var sum float64
           var i int
           for yo := y - 1; yo <= y+1; yo++ {
               for xo := x - 1; xo <= x+1; xo++ {
                   if (image.Point{xo, yo}).In(src.Rect) {
                       sum += k[i] * float64(src.At(xo, yo).(color.Gray).Y)
                   } else {
                       sum += k[i] * float64(src.At(x, y).(color.Gray).Y)
                   }
                   i++
               }
           }
           dst.SetGray(x, y,
               color.Gray{uint8(math.Min(255, math.Max(0, sum)))})
       }
   }

}

var blur = [9]float64{

   1. / 9, 1. / 9, 1. / 9,
   1. / 9, 1. / 9, 1. / 9,
   1. / 9, 1. / 9, 1. / 9}

// blurY example function applies blur kernel to Y channel
// of YCbCr image using generic kernel filter function kf3.
func blurY(src *image.YCbCr) *image.YCbCr {

   dst := *src
   // catch zero-size image here
   if src.Rect.Max.X == src.Rect.Min.X || src.Rect.Max.Y == src.Rect.Min.Y {
       return &dst
   }
   // pass Y channels as gray images
   srcGray := image.Gray{src.Y, src.YStride, src.Rect}
   dstGray := srcGray
   dstGray.Pix = make([]uint8, len(src.Y))
   kf3(&blur, &srcGray, &dstGray) // call generic convolution function
   // complete result
   dst.Y = dstGray.Pix                   // convolution result
   dst.Cb = append([]uint8{}, src.Cb...) // Cb, Cr are just copied
   dst.Cr = append([]uint8{}, src.Cr...)
   return &dst

}

func main() {

   // Example file used here is Lenna100.jpg from the task "Percentage
   // difference between images"
   f, err := os.Open("Lenna100.jpg")
   if err != nil {
       fmt.Println(err)
       return
   }
   img, err := jpeg.Decode(f)
   if err != nil {
       fmt.Println(err)
       return
   }
   f.Close()
   y, ok := img.(*image.YCbCr)
   if !ok {
       fmt.Println("expected color jpeg")
       return
   }
   f, err = os.Create("blur.jpg")
   if err != nil {
       fmt.Println(err)
       return
   }
   err = jpeg.Encode(f, blurY(y), &jpeg.Options{90})
   if err != nil {
       fmt.Println(err)
   }

}</lang> Alternative version, building on code from the Bitmap task.

New function for raster package: <lang go>package raster

import "math"

func (g *Grmap) KernelFilter3(k []float64) *Grmap {

   if len(k) != 9 {
       return nil
   }
   r := NewGrmap(g.cols, g.rows)
   r.Comments = append([]string{}, g.Comments...)
   // Filter edge pixels with minimal code.
   // Execution time per pixel is high but there are few edge pixels
   // relative to the interior.
   o3 := [][]int{
       {-1, -1}, {0, -1}, {1, -1},
       {-1, 0}, {0, 0}, {1, 0},
       {-1, 1}, {0, 1}, {1, 1}}
   edge := func(x, y int) uint16 {
       var sum float64
       for i, o := range o3 {
           c, ok := g.GetPx(x+o[0], y+o[1])
           if !ok {
               c = g.pxRow[y][x]
           }
           sum += float64(c) * k[i]
       }
       return uint16(math.Min(math.MaxUint16, math.Max(0,sum)))
   }
   for x := 0; x < r.cols; x++ {
       r.pxRow[0][x] = edge(x, 0)
       r.pxRow[r.rows-1][x] = edge(x, r.rows-1)
   }
   for y := 1; y < r.rows-1; y++ {
       r.pxRow[y][0] = edge(0, y)
       r.pxRow[y][r.cols-1] = edge(r.cols-1, y)
   }
   if r.rows < 3 || r.cols < 3 {
       return r
   }
   // Interior pixels can be filtered much more efficiently.
   otr := -g.cols + 1
   obr := g.cols + 1
   z := g.cols + 1
   c2 := g.cols - 2
   for y := 1; y < r.rows-1; y++ {
       tl := float64(g.pxRow[y-1][0])
       tc := float64(g.pxRow[y-1][1])
       tr := float64(g.pxRow[y-1][2])
       ml := float64(g.pxRow[y][0])
       mc := float64(g.pxRow[y][1])
       mr := float64(g.pxRow[y][2])
       bl := float64(g.pxRow[y+1][0])
       bc := float64(g.pxRow[y+1][1])
       br := float64(g.pxRow[y+1][2])
       for x := 1; ; x++ {
           r.px[z] = uint16(math.Min(math.MaxUint16, math.Max(0,
               tl*k[0] + tc*k[1] + tr*k[2] +
               ml*k[3] + mc*k[4] + mr*k[5] +
               bl*k[6] + bc*k[7] + br*k[8])))
           if x == c2 {
               break
           }
           z++
           tl, tc, tr = tc, tr, float64(g.px[z+otr])
           ml, mc, mr = mc, mr, float64(g.px[z+1])
           bl, bc, br = bc, br, float64(g.px[z+obr])
       }
       z += 3
   }
   return r

}</lang> Demonstration program: <lang go>package main

// Files required to build supporting package raster are found in:
// * This task (immediately above)
// * Bitmap
// * Grayscale image
// * Read a PPM file
// * Write a PPM file

import (

   "fmt"
   "raster"

)

var blur = []float64{

   1./9, 1./9, 1./9,
   1./9, 1./9, 1./9,
   1./9, 1./9, 1./9}

var sharpen = []float64{

   -1, -1, -1,
   -1,  9, -1,
   -1, -1, -1}

func main() {

   // Example file used here is Lenna100.jpg from the task "Percentage
    // difference between images" converted with the command
   // convert Lenna100.jpg -colorspace gray Lenna100.ppm
   b, err := raster.ReadPpmFile("Lenna100.ppm")
   if err != nil {
       fmt.Println(err)
       return
   }
   g0 := b.Grmap()
   g1 := g0.KernelFilter3(blur)
   err = g1.Bitmap().WritePpmFile("blur.ppm")
   if err != nil {
       fmt.Println(err)
   }

}</lang>

J

<lang J>NB. pad the edges of an array with border pixels
NB. (increasing the first two dimensions by 1 less than the kernel size)
pad=: adverb define

 'a b'=. (<. ,. >.) 0.5 0.5 p. $m
 a"_`(0 , ] - 1:)`(# 1:)}~&# # b"_`(0 , ] - 1:)`(# 1:)}~&(1 { $) #"1 ]

)

kernel_filter=: adverb define

  ($m)+/ .*&(,m)&(,/);._3 m pad

)</lang>


This code assumes that the leading dimensions of the array represent pixels and any trailing dimensions represent structure to be preserved (this is a fairly common approach and matches the J implementation at Basic bitmap storage). Note also that we assume that the image is larger than a single pixel in both directions. Any sized kernel is supported (as long as it's at least one pixel in each direction).

Example use:

<lang J> NB. kernels borrowed from C and TCL implementations

  sharpen_kernel=: _1+10*4=i.3 3
  blur_kernel=: 3 3$%9
  emboss_kernel=: _2 _1 0,_1 1 1,:0 1 2
  sobel_emboss_kernel=: _1 _2 _1,0,:1 2 1
  'blurred.ppm' writeppm~ blur_kernel kernel_filter readppm 'original.ppm'</lang>

Java

Code: <lang Java>import java.awt.image.*;
import java.io.File;
import java.io.IOException;
import javax.imageio.*;

public class ImageConvolution {

 public static class ArrayData
 {
   public final int[] dataArray;
   public final int width;
   public final int height;
   
   public ArrayData(int width, int height)
   {
     this(new int[width * height], width, height);
   }
   
   public ArrayData(int[] dataArray, int width, int height)
   {
     this.dataArray = dataArray;
     this.width = width;
     this.height = height;
   }
   
   public int get(int x, int y)
   {  return dataArray[y * width + x];  }
   
   public void set(int x, int y, int value)
   {  dataArray[y * width + x] = value;  }
 }
 
 private static int bound(int value, int endIndex)
 {
   if (value < 0)
     return 0;
   if (value < endIndex)
     return value;
   return endIndex - 1;
 }
 
 public static ArrayData convolute(ArrayData inputData, ArrayData kernel, int kernelDivisor)
 {
   int inputWidth = inputData.width;
   int inputHeight = inputData.height;
   int kernelWidth = kernel.width;
   int kernelHeight = kernel.height;
   if ((kernelWidth <= 0) || ((kernelWidth & 1) != 1))
     throw new IllegalArgumentException("Kernel must have odd width");
   if ((kernelHeight <= 0) || ((kernelHeight & 1) != 1))
     throw new IllegalArgumentException("Kernel must have odd height");
   int kernelWidthRadius = kernelWidth >>> 1;
   int kernelHeightRadius = kernelHeight >>> 1;
   
   ArrayData outputData = new ArrayData(inputWidth, inputHeight);
   for (int i = inputWidth - 1; i >= 0; i--)
   {
     for (int j = inputHeight - 1; j >= 0; j--)
     {
       double newValue = 0.0;
       for (int kw = kernelWidth - 1; kw >= 0; kw--)
         for (int kh = kernelHeight - 1; kh >= 0; kh--)
           newValue += kernel.get(kw, kh) * inputData.get(
                         bound(i + kw - kernelWidthRadius, inputWidth),
                         bound(j + kh - kernelHeightRadius, inputHeight));
       outputData.set(i, j, (int)Math.round(newValue / kernelDivisor));
     }
   }
   return outputData;
 }
 
 public static ArrayData[] getArrayDatasFromImage(String filename) throws IOException
 {
   BufferedImage inputImage = ImageIO.read(new File(filename));
   int width = inputImage.getWidth();
   int height = inputImage.getHeight();
   int[] rgbData = inputImage.getRGB(0, 0, width, height, null, 0, width);
   ArrayData reds = new ArrayData(width, height);
   ArrayData greens = new ArrayData(width, height);
   ArrayData blues = new ArrayData(width, height);
   for (int y = 0; y < height; y++)
   {
     for (int x = 0; x < width; x++)
     {
       int rgbValue = rgbData[y * width + x];
       reds.set(x, y, (rgbValue >>> 16) & 0xFF);
       greens.set(x, y, (rgbValue >>> 8) & 0xFF);
       blues.set(x, y, rgbValue & 0xFF);
     }
   }
   return new ArrayData[] { reds, greens, blues };
 }
 
 public static void writeOutputImage(String filename, ArrayData[] redGreenBlue) throws IOException
 {
   ArrayData reds = redGreenBlue[0];
   ArrayData greens = redGreenBlue[1];
   ArrayData blues = redGreenBlue[2];
   BufferedImage outputImage = new BufferedImage(reds.width, reds.height,
                                                 BufferedImage.TYPE_INT_ARGB);
   for (int y = 0; y < reds.height; y++)
   {
     for (int x = 0; x < reds.width; x++)
     {
       int red = bound(reds.get(x, y), 256);
       int green = bound(greens.get(x, y), 256);
       int blue = bound(blues.get(x, y), 256);
       outputImage.setRGB(x, y, (red << 16) | (green << 8) | blue | -0x01000000);
     }
   }
   ImageIO.write(outputImage, "PNG", new File(filename));
   return;
 }
 
 public static void main(String[] args) throws IOException
 {
   int kernelWidth = Integer.parseInt(args[2]);
   int kernelHeight = Integer.parseInt(args[3]);
   int kernelDivisor = Integer.parseInt(args[4]);
   System.out.println("Kernel size: " + kernelWidth + "x" + kernelHeight +
                      ", divisor=" + kernelDivisor);
   int y = 5;
   ArrayData kernel = new ArrayData(kernelWidth, kernelHeight);
   for (int i = 0; i < kernelHeight; i++)
   {
     System.out.print("[");
     for (int j = 0; j < kernelWidth; j++)
     {
       kernel.set(j, i, Integer.parseInt(args[y++]));
       System.out.print(" " + kernel.get(j, i) + " ");
     }
     System.out.println("]");
   }
   
   ArrayData[] dataArrays = getArrayDatasFromImage(args[0]);
   for (int i = 0; i < dataArrays.length; i++)
     dataArrays[i] = convolute(dataArrays[i], kernel, kernelDivisor);
   writeOutputImage(args[1], dataArrays);
   return;
 }

}</lang>


Output from example pentagon image

Example 5x5 Gaussian blur, using Pentagon.png from the Hough transform task:

java ImageConvolution pentagon.png JavaImageConvolution.png 5 5 273 1 4 7 4 1  4 16 26 16 4  7 26 41 26 7  4 16 26 16 4  1 4 7 4 1
Kernel size: 5x5, divisor=273
[ 1  4  7  4  1 ]
[ 4  16  26  16  4 ]
[ 7  26  41  26  7 ]
[ 4  16  26  16  4 ]
[ 1  4  7  4  1 ]

JavaScript

Code: <lang javascript>// Image imageIn, Array kernel, function (Error error, Image imageOut)
// precondition: Image is loaded
// returns loaded Image to asynchronous callback function
function convolve(imageIn, kernel, callback) {

   var dim = Math.sqrt(kernel.length),
       pad = Math.floor(dim / 2);
   
   if (dim % 2 !== 1) {
       return callback(new RangeError("Invalid kernel dimension"), null);
   }
   
   var w = imageIn.width,
       h = imageIn.height,
       can = document.createElement('canvas'),
       cw,
       ch,
       ctx,
       imgIn, imgOut,
       datIn, datOut;
   
   can.width = cw = w + pad * 2; // add padding
   can.height = ch = h + pad * 2; // add padding
   
   ctx = can.getContext('2d');
   ctx.fillStyle = '#000'; // fill with opaque black
   ctx.fillRect(0, 0, cw, ch);
   ctx.drawImage(imageIn, pad, pad);
   
   imgIn = ctx.getImageData(0, 0, cw, ch);
   datIn = imgIn.data;
   
   imgOut = ctx.createImageData(w, h);
   datOut = imgOut.data;
   
   var row, col, pix, i, dx, dy, r, g, b;
   
    for (row = pad; row < h + pad; row++) {
        for (col = pad; col < w + pad; col++) {
           r = g = b = 0;
           
           for (dx = -pad; dx <= pad; dx++) {
               for (dy = -pad; dy <= pad; dy++) {
                   i = (dy + pad) * dim + (dx + pad); // kernel index
                   pix = 4 * ((row + dy) * cw + (col + dx)); // image index
                   r += datIn[pix++] * kernel[i];
                   g += datIn[pix++] * kernel[i];
                   b += datIn[pix  ] * kernel[i];
               }
           }
           
           pix = 4 * ((row - pad) * w + (col - pad)); // destination index
           datOut[pix++] = (r + .5) ^ 0;
           datOut[pix++] = (g + .5) ^ 0;
           datOut[pix++] = (b + .5) ^ 0;
           datOut[pix  ] = 255; // we want opaque image
       }
   }
   
   // reuse canvas
   can.width = w;
   can.height = h;
   
   ctx.putImageData(imgOut, 0, 0);
   
   var imageOut = new Image();
   
   imageOut.addEventListener('load', function () {
       callback(null, imageOut);
   });
   
   imageOut.addEventListener('error', function (error) {
       callback(error, null);
   });
   
   imageOut.src = can.toDataURL('image/png');

}</lang>

Example Usage:

var image = new Image();

image.addEventListener('load', function () {
    image.alt = 'Player';
    document.body.appendChild(image);
    
    // laplace filter
    convolve(image,
             [0, 1, 0,
              1,-4, 1,
              0, 1, 0],
             function (error, result) {
                 if (error !== null) {
                     console.error(error);
                 } else {
                     result.alt = 'Boundary';
                     document.body.appendChild(result);
                 }
             }
    );
});

image.src = '/img/player.png';

Tcl

<lang tcl>package require Tk

# Requires an applyKernel procedure that applies the kernel at a single pixel.
proc convolve {srcImage kernel {dstImage ""}} {
   if {$dstImage eq ""} {
      set dstImage [image create photo]
   }
   set w [image width $srcImage]
   set h [image height $srcImage]
   for {set x 0} {$x < $w} {incr x} {

      for {set y 0} {$y < $h} {incr y} {
         applyKernel $srcImage $x $y -- $kernel -> $dstImage
      }

   }
   return $dstImage

}

# Demonstration code using the teapot image from Tk's widget demo

image create photo teapot -file $tk_library/demos/images/teapot.ppm
pack [labelframe .src -text Source] -side left
pack [label .src.l -image teapot]
foreach {label kernel} {

   Emboss {
      {-2. -1. 0.}
      {-1. 1. 1.}
      { 0. 1. 2.}
   }
   Sharpen {
      {-1. -1. -1}
      {-1. 9. -1}
      {-1. -1. -1}
   }
   Blur {
      {.1111 .1111 .1111}
      {.1111 .1111 .1111}
      {.1111 .1111 .1111}
   }

} {

   set name [string tolower $label]
   update
   pack [labelframe .$name -text $label] -side left
   pack [label .$name.l -image [convolve teapot $kernel]]

}</lang>