Image convolution

You are encouraged to solve this task according to the task description, using any language you may know.
One class of digital image filters is described by a rectangular matrix of real coefficients, called a kernel, convolved with the image over a sliding window of pixels. Usually the kernel is square, with entries K_kl where k, l are in the range -R, -R+1, ..., R-1, R; W = 2R+1 is the kernel width. The filter determines the new value of a monochromatic image pixel P_ij as a convolution of the image pixels in the window centered at i, j with the kernel values:

    P'_{ij} = \sum_{k=-R}^{R} \sum_{l=-R}^{R} P_{i+k,\,j+l} \, K_{kl}

Color images are usually split into their channels, which are filtered independently. The color model may be changed as well, i.e. the filtering need not be performed in RGB. Common kernel sizes are 3×3 and 5×5. The complexity of the filtering grows quadratically (O(W²)) with the kernel width.
Task: Write a generic convolution 3x3 kernel filter. Optionally show some end user filters that use this generic one.
(To test the functions below, you can use these input and output examples.)
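For reference, here is a minimal sketch of the per-pixel computation for a monochromatic image, written in C; the image layout, the treat-outside-as-black edge handling, the clamping and the function name are illustrative assumptions rather than requirements of the task:

<lang c>/* Illustrative sketch only: img is a w*h array of 0..255 values, k is a
   (2*R+1)x(2*R+1) kernel stored row-major.  Pixels outside the image are
   treated as black; the result is clamped to 0..255 and rounded. */
static unsigned char convolve_pixel(const unsigned char *img, int w, int h,
                                    int i, int j, const double *k, int R)
{
    double sum = 0.0;
    for (int dy = -R; dy <= R; dy++)
        for (int dx = -R; dx <= R; dx++) {
            int x = j + dx, y = i + dy;
            double p = (x >= 0 && x < w && y >= 0 && y < h) ? img[y * w + x] : 0.0;
            sum += p * k[(dy + R) * (2 * R + 1) + (dx + R)];
        }
    if (sum < 0.0)   sum = 0.0;
    if (sum > 255.0) sum = 255.0;
    return (unsigned char)(sum + 0.5);
}</lang>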
Ada
First we define floating-point luminance and color pixel types, which will then be used for the filtering: <lang ada>type Float_Luminance is new Float;
type Float_Pixel is record
R, G, B : Float_Luminance := 0.0;
end record;
function "*" (Left : Float_Pixel; Right : Float_Luminance) return Float_Pixel is
pragma Inline ("*");
begin
return (Left.R * Right, Left.G * Right, Left.B * Right);
end "*";
function "+" (Left, Right : Float_Pixel) return Float_Pixel is
pragma Inline ("+");
begin
return (Left.R + Right.R, Left.G + Right.G, Left.B + Right.B);
end "+";
function To_Luminance (X : Float_Luminance) return Luminance is
pragma Inline (To_Luminance);
begin
if X <= 0.0 then return 0; elsif X >= 255.0 then return 255; else return Luminance (X); end if;
end To_Luminance;
function To_Pixel (X : Float_Pixel) return Pixel is
pragma Inline (To_Pixel);
begin
return (To_Luminance (X.R), To_Luminance (X.G), To_Luminance (X.B));
end To_Pixel;</lang> Float_Luminance is an unconstrained equivalent of Luminance; Float_Pixel is the corresponding equivalent of Pixel. The conversion operations To_Luminance and To_Pixel saturate the corresponding values. The operation + is defined per channel, and * is defined as multiplication by a scalar (i.e. Float_Pixel forms a vector space).
Now we are ready to implement the filter. The operation is performed in memory. Access to the image array is minimized by using a sliding window. The filter is in fact a triplet of filters handling each image channel independently; it can be used with other color models as well. <lang ada>type Kernel_3x3 is array (-1..1, -1..1) of Float_Luminance;
procedure Filter (Picture : in out Image; K : Kernel_3x3) is
   function Get (I, J : Integer) return Float_Pixel is
      pragma Inline (Get);
   begin
      if I in Picture'Range (1) and then J in Picture'Range (2) then
         declare
            Color : Pixel := Picture (I, J);
         begin
            return (Float_Luminance (Color.R), Float_Luminance (Color.G), Float_Luminance (Color.B));
         end;
      else
         return (others => 0.0);
      end if;
   end Get;
   W11, W12, W13 : Float_Pixel; -- The image window
   W21, W22, W23 : Float_Pixel;
   W31, W32, W33 : Float_Pixel;
   Above : array (Picture'First (2) - 1 .. Picture'Last (2) + 1) of Float_Pixel;
   This  : Float_Pixel;
begin
   for I in Picture'Range (1) loop
      W11 := Above (Picture'First (2) - 1); -- The upper row is taken from the cache
      W12 := Above (Picture'First (2));
      W13 := Above (Picture'First (2) + 1);
      W21 := (others => 0.0);               -- The middle row
      W22 := Get (I, Picture'First (2));
      W23 := Get (I, Picture'First (2) + 1);
      W31 := (others => 0.0);               -- The bottom row
      W32 := Get (I + 1, Picture'First (2));
      W33 := Get (I + 1, Picture'First (2) + 1);
      for J in Picture'Range (2) loop
         This := W11 * K (-1, -1) + W12 * K (-1, 0) + W13 * K (-1, 1) +
                 W21 * K ( 0, -1) + W22 * K ( 0, 0) + W23 * K ( 0, 1) +
                 W31 * K ( 1, -1) + W32 * K ( 1, 0) + W33 * K ( 1, 1);
         Above (J - 1) := W21;
         W11 := W12; W12 := W13; W13 := Above (J + 1); -- Shift the window
         W21 := W22; W22 := W23; W23 := Get (I, J + 1);
         W31 := W32; W32 := W33; W33 := Get (I + 1, J + 1);
         Picture (I, J) := To_Pixel (This);
      end loop;
      Above (Picture'Last (2)) := W21;
   end loop;
end Filter;</lang> Example of use: <lang ada>   F1, F2 : File_Type;
begin
   Open (F1, In_File, "city.ppm");
   declare
      X : Image := Get_PPM (F1);
   begin
      Close (F1);
      Create (F2, Out_File, "city_sharpen.ppm");
      Filter (X, ((-1.0, -1.0, -1.0), (-1.0, 9.0, -1.0), (-1.0, -1.0, -1.0)));
      Put_PPM (F2, X);
   end;
   Close (F2);</lang>
BBC BASIC


<lang bbcbasic>      Width% = 200
      Height% = 200
      DIM out&(Width%-1, Height%-1, 2)
      VDU 23,22,Width%;Height%;8,16,16,128
      *DISPLAY Lena
      OFF

      DIM filter%(2, 2)
      filter%() = -1, -1, -1, -1, 12, -1, -1, -1, -1

      REM Do the convolution:
      FOR Y% = 1 TO Height%-2
        FOR X% = 1 TO Width%-2
          R% = 0 : G% = 0 : B% = 0
          FOR I% = -1 TO 1
            FOR J% = -1 TO 1
              C% = TINT((X%+I%)*2, (Y%+J%)*2)
              F% = filter%(I%+1,J%+1)
              R% += F% * (C% AND &FF)
              G% += F% * (C% >> 8 AND &FF)
              B% += F% * (C% >> 16)
            NEXT
          NEXT
          IF R% < 0 R% = 0 ELSE IF R% > 1020 R% = 1020
          IF G% < 0 G% = 0 ELSE IF G% > 1020 G% = 1020
          IF B% < 0 B% = 0 ELSE IF B% > 1020 B% = 1020
          out&(X%, Y%, 0) = R% / 4 + 0.5
          out&(X%, Y%, 1) = G% / 4 + 0.5
          out&(X%, Y%, 2) = B% / 4 + 0.5
        NEXT
      NEXT Y%

      REM Display:
      GCOL 1
      FOR Y% = 0 TO Height%-1
        FOR X% = 0 TO Width%-1
          COLOUR 1, out&(X%,Y%,0), out&(X%,Y%,1), out&(X%,Y%,2)
          LINE X%*2,Y%*2,X%*2,Y%*2
        NEXT
      NEXT Y%

      REPEAT
        WAIT 1
      UNTIL FALSE</lang>
C
Interface:
<lang c>image filter(image img, double *K, int Ks, double, double);</lang>
The implementation follows (the Ks argument is the kernel radius: 1 specifies a 3×3 matrix, 2 a 5×5 matrix, ..., N a (2N+1)×(2N+1) matrix).
<lang c>#include "imglib.h"
inline static color_component GET_PIXEL_CHECK(image img, int x, int y, int l) {
   if ( (x<0) || (x >= img->width) || (y<0) || (y >= img->height) ) return 0;
   return GET_PIXEL(img, x, y)[l];
}
image filter(image im, double *K, int Ks, double divisor, double offset) {
   image oi;
   unsigned int ix, iy, l;
   int kx, ky;
   double cp[3];

   oi = alloc_img(im->width, im->height);
   if ( oi != NULL ) {
      for(ix=0; ix < im->width; ix++) {
         for(iy=0; iy < im->height; iy++) {
            cp[0] = cp[1] = cp[2] = 0.0;
            for(kx=-Ks; kx <= Ks; kx++) {
               for(ky=-Ks; ky <= Ks; ky++) {
                  for(l=0; l<3; l++)
                     cp[l] += (K[(kx+Ks) + (ky+Ks)*(2*Ks+1)]/divisor) *
                              ((double)GET_PIXEL_CHECK(im, ix+kx, iy+ky, l)) + offset;
               }
            }
            for(l=0; l<3; l++)
               cp[l] = (cp[l]>255.0) ? 255.0 : ((cp[l]<0.0) ? 0.0 : cp[l]) ;
            put_pixel_unsafe(oi, ix, iy,
                             (color_component)cp[0],
                             (color_component)cp[1],
                             (color_component)cp[2]);
         }
      }
      return oi;
   }
   return NULL;
}</lang>
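As an illustration of the Ks convention described above (not part of the original entry), a 5×5 kernel is stored row-major in an array of (2·2+1)² = 25 doubles and passed with Ks = 2; the Gaussian values and divisor 273 below are the same ones used in the D and Java entries:

<lang c>/* Illustrative sketch: a 5x5 Gaussian blur kernel, passed with
   Ks = 2, divisor = 273 and offset = 0. */
double gaussian5[5*5] = {
   1.,  4.,  7.,  4., 1.,
   4., 16., 26., 16., 4.,
   7., 26., 41., 26., 7.,
   4., 16., 26., 16., 4.,
   1.,  4.,  7.,  4., 1.
};
/* Hypothetical call, assuming img is an already loaded image:
   image blurred = filter(img, gaussian5, 2, 273.0, 0.0); */</lang>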
Usage example:
The read_image function is from here.
<lang c>#include <stdio.h>
#include "imglib.h"
const char *input = "Lenna100.jpg";
const char *output = "filtered_lenna%d.ppm";
double emboss_kernel[3*3] = {
   -2., -1., 0.,
   -1.,  1., 1.,
    0.,  1., 2.,
};
double sharpen_kernel[3*3] = {
   -1.0, -1.0, -1.0,
   -1.0,  9.0, -1.0,
   -1.0, -1.0, -1.0
};
double sobel_emboss_kernel[3*3] = {
   -1., -2., -1.,
    0.,  0.,  0.,
    1.,  2.,  1.,
};
double box_blur_kernel[3*3] = {
   1.0, 1.0, 1.0,
   1.0, 1.0, 1.0,
   1.0, 1.0, 1.0,
};

double *filters[4] = {
   emboss_kernel, sharpen_kernel, sobel_emboss_kernel, box_blur_kernel
};
const double filter_params[2*4] = {
   1.0, 0.0,
   1.0, 0.0,
   1.0, 0.5,
   9.0, 0.0
};
int main() {
   image ii, oi;
   int i;
   char lennanames[30];

   ii = read_image(input);
   if ( ii != NULL ) {
      for(i=0; i<4; i++) {
         sprintf(lennanames, output, i);
         oi = filter(ii, filters[i], 1, filter_params[2*i], filter_params[2*i+1]);
         if ( oi != NULL ) {
            FILE *outfh = fopen(lennanames, "w");
            if ( outfh != NULL ) {
               output_ppm(outfh, oi);
               fclose(outfh);
            } else {
               fprintf(stderr, "out err %s\n", output);
            }
            free_img(oi);
         } else {
            fprintf(stderr, "err creating img filters %d\n", i);
         }
      }
      free_img(ii);
   } else {
      fprintf(stderr, "err reading %s\n", input);
   }
}</lang>
D
This requires the module from the Grayscale Image Task. <lang d>import std.string, std.math, std.algorithm, grayscale_image;
struct ConvolutionFilter {
    double[][] kernel;
    double divisor, offset_;
    string name;
}
Image!Color convolve(Color)(in Image!Color im,
in ConvolutionFilter filter)
pure nothrow in {
    assert(im !is null);
    assert(!isnan(filter.divisor) && !isnan(filter.offset_));
    assert(filter.divisor != 0);
    assert(filter.kernel.length > 0 && filter.kernel[0].length > 0);
    foreach (const row; filter.kernel) // Is rectangular.
        assert(row.length == filter.kernel[0].length);
    assert(filter.kernel.length % 2 == 1);    // Odd sized kernel.
    assert(filter.kernel[0].length % 2 == 1);
    assert(im.ny >= filter.kernel.length);
    assert(im.nx >= filter.kernel[0].length);
} out(result) {
    assert(result !is null);
    assert(result.nx == im.nx && result.ny == im.ny);
} body {
    immutable knx2 = filter.kernel[0].length / 2;
    immutable kny2 = filter.kernel.length / 2;
    auto io = new Image!Color(im.nx, im.ny);

    static if (is(Color == RGB))
        alias CT = typeof(Color.r); // Component type.
    else static if (is(typeof(Color.c)))
        alias CT = typeof(Color.c);
    else
        alias CT = Color;

    foreach (immutable y; kny2 .. im.ny - kny2) {
        foreach (immutable x; knx2 .. im.nx - knx2) {
            static if (is(Color == RGB))
                double[3] total = 0.0;
            else
                double total = 0.0;

            foreach (immutable sy, const kRow; filter.kernel) {
                foreach (immutable sx, immutable k; kRow) {
                    immutable p = im[x + sx - knx2, y + sy - kny2];
                    static if (is(Color == RGB)) {
                        total[0] += p.r * k;
                        total[1] += p.g * k;
                        total[2] += p.b * k;
                    } else {
                        total += p * k;
                    }
                }
            }

            immutable D = filter.divisor;
            immutable O = filter.offset_ * CT.max;
            static if (is(Color == RGB)) {
                io[x, y] = Color(
                    cast(CT)min(max(total[0] / D + O, CT.min), CT.max),
                    cast(CT)min(max(total[1] / D + O, CT.min), CT.max),
                    cast(CT)min(max(total[2] / D + O, CT.min), CT.max));
            } else static if (is(typeof(Color.c))) {
                io[x, y] = Color(
                    cast(CT)min(max(total / D + O, CT.min), CT.max));
            } else {
                // If Color doesn't have a 'c' field, then Color is
                // assumed to be a built-in type.
                io[x, y] = cast(CT)min(max(total / D + O, CT.min), CT.max);
            }
        }
    }
return io;
}
void main() {
immutable ConvolutionFilter[] filters = [ {[[-2.0, -1.0, 0.0], [-1.0, 1.0, 1.0], [ 0.0, 1.0, 2.0]], divisor:1.0, offset_:0.0, name:"Emboss"},
{[[-1.0, -1.0, -1.0], [-1.0, 9.0, -1.0], [-1.0, -1.0, -1.0]], divisor:1.0, 0.0, "Sharpen"},
{[[-1.0, -2.0, -1.0], [ 0.0, 0.0, 0.0], [ 1.0, 2.0, 1.0]], divisor:1.0, 0.5, "Sobel_emboss"},
{[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], divisor:9.0, 0.0, "Box_blur"},
{[[1, 4, 7, 4, 1], [4, 16, 26, 16, 4], [7, 26, 41, 26, 7], [4, 16, 26, 16, 4], [1, 4, 7, 4, 1]], divisor:273, 0.0, "Gaussian_blur"}];
    Image!RGB im;
    im.loadPPM6("Lenna100.ppm");

    foreach (immutable filter; filters)
        im.convolve(filter)
          .savePPM6(format("lenna_%s.ppm", filter.name));

    const img = im.rgb2grayImage();
    foreach (immutable filter; filters)
        img.convolve(filter)
           .savePGM(format("lenna_gray_%s.ppm", filter.name));
}</lang>
Go
Using standard image library: <lang go>package main
import (
    "fmt"
    "image"
    "image/color"
    "image/jpeg"
    "math"
    "os"
)
// kf3 is a generic convolution 3x3 kernel filter that operates on
// images of type image.Gray from the Go standard image library.
func kf3(k *[9]float64, src, dst *image.Gray) {
    for y := src.Rect.Min.Y; y < src.Rect.Max.Y; y++ {
        for x := src.Rect.Min.X; x < src.Rect.Max.X; x++ {
            var sum float64
            var i int
            for yo := y - 1; yo <= y+1; yo++ {
                for xo := x - 1; xo <= x+1; xo++ {
                    if (image.Point{xo, yo}).In(src.Rect) {
                        sum += k[i] * float64(src.At(xo, yo).(color.Gray).Y)
                    } else {
                        sum += k[i] * float64(src.At(x, y).(color.Gray).Y)
                    }
                    i++
                }
            }
            dst.SetGray(x, y, color.Gray{uint8(math.Min(255, math.Max(0, sum)))})
        }
    }
}
var blur = [9]float64{
1. / 9, 1. / 9, 1. / 9, 1. / 9, 1. / 9, 1. / 9, 1. / 9, 1. / 9, 1. / 9}
// blurY example function applies blur kernel to Y channel
// of YCbCr image using generic kernel filter function kf3
func blurY(src *image.YCbCr) *image.YCbCr {
    dst := *src

    // catch zero-size image here
    if src.Rect.Max.X == src.Rect.Min.X || src.Rect.Max.Y == src.Rect.Min.Y {
        return &dst
    }

    // pass Y channels as gray images
    srcGray := image.Gray{src.Y, src.YStride, src.Rect}
    dstGray := srcGray
    dstGray.Pix = make([]uint8, len(src.Y))
    kf3(&blur, &srcGray, &dstGray) // call generic convolution function

    // complete result
    dst.Y = dstGray.Pix                   // convolution result
    dst.Cb = append([]uint8{}, src.Cb...) // Cb, Cr are just copied
    dst.Cr = append([]uint8{}, src.Cr...)
    return &dst
}
func main() {
    // Example file used here is Lenna100.jpg from the task "Percentage
    // difference between images"
    f, err := os.Open("Lenna100.jpg")
    if err != nil {
        fmt.Println(err)
        return
    }
    img, err := jpeg.Decode(f)
    if err != nil {
        fmt.Println(err)
        return
    }
    f.Close()
    y, ok := img.(*image.YCbCr)
    if !ok {
        fmt.Println("expected color jpeg")
        return
    }
    f, err = os.Create("blur.jpg")
    if err != nil {
        fmt.Println(err)
        return
    }
    err = jpeg.Encode(f, blurY(y), &jpeg.Options{90})
    if err != nil {
        fmt.Println(err)
    }
}</lang> Alternative version, building on code from bitmap task.
New function for raster package: <lang go>package raster
import "math"
func (g *Grmap) KernelFilter3(k []float64) *Grmap {
    if len(k) != 9 {
        return nil
    }
    r := NewGrmap(g.cols, g.rows)
    r.Comments = append([]string{}, g.Comments...)
    // Filter edge pixels with minimal code.
    // Execution time per pixel is high but there are few edge pixels
    // relative to the interior.
    o3 := [][]int{
        {-1, -1}, {0, -1}, {1, -1},
        {-1, 0}, {0, 0}, {1, 0},
        {-1, 1}, {0, 1}, {1, 1}}
    edge := func(x, y int) uint16 {
        var sum float64
        for i, o := range o3 {
            c, ok := g.GetPx(x+o[0], y+o[1])
            if !ok {
                c = g.pxRow[y][x]
            }
            sum += float64(c) * k[i]
        }
        return uint16(math.Min(math.MaxUint16, math.Max(0, sum)))
    }
    for x := 0; x < r.cols; x++ {
        r.pxRow[0][x] = edge(x, 0)
        r.pxRow[r.rows-1][x] = edge(x, r.rows-1)
    }
    for y := 1; y < r.rows-1; y++ {
        r.pxRow[y][0] = edge(0, y)
        r.pxRow[y][r.cols-1] = edge(r.cols-1, y)
    }
    if r.rows < 3 || r.cols < 3 {
        return r
    }

    // Interior pixels can be filtered much more efficiently.
    otr := -g.cols + 1
    obr := g.cols + 1
    z := g.cols + 1
    c2 := g.cols - 2
    for y := 1; y < r.rows-1; y++ {
        tl := float64(g.pxRow[y-1][0])
        tc := float64(g.pxRow[y-1][1])
        tr := float64(g.pxRow[y-1][2])
        ml := float64(g.pxRow[y][0])
        mc := float64(g.pxRow[y][1])
        mr := float64(g.pxRow[y][2])
        bl := float64(g.pxRow[y+1][0])
        bc := float64(g.pxRow[y+1][1])
        br := float64(g.pxRow[y+1][2])
        for x := 1; ; x++ {
            r.px[z] = uint16(math.Min(math.MaxUint16, math.Max(0,
                tl*k[0]+tc*k[1]+tr*k[2]+
                    ml*k[3]+mc*k[4]+mr*k[5]+
                    bl*k[6]+bc*k[7]+br*k[8])))
            if x == c2 {
                break
            }
            z++
            tl, tc, tr = tc, tr, float64(g.px[z+otr])
            ml, mc, mr = mc, mr, float64(g.px[z+1])
            bl, bc, br = bc, br, float64(g.px[z+obr])
        }
        z += 3
    }
    return r
}</lang> Demonstration program: <lang go>package main
// Files required to build supporting package raster are found in:
//   * This task (immediately above)
//   * Bitmap
//   * Grayscale image
//   * Read a PPM file
//   * Write a PPM file
import (
    "fmt"

    "raster"
)
var blur = []float64{
1./9, 1./9, 1./9, 1./9, 1./9, 1./9, 1./9, 1./9, 1./9}
var sharpen = []float64{
-1, -1, -1, -1, 9, -1, -1, -1, -1}
func main() {
    // Example file used here is Lenna100.jpg from the task "Percentage
    // difference between images" converted with the command
    // convert Lenna100.jpg -colorspace gray Lenna100.ppm
    b, err := raster.ReadPpmFile("Lenna100.ppm")
    if err != nil {
        fmt.Println(err)
        return
    }
    g0 := b.Grmap()
    g1 := g0.KernelFilter3(blur)
    err = g1.Bitmap().WritePpmFile("blur.ppm")
    if err != nil {
        fmt.Println(err)
    }
}</lang>
J
<lang J>NB. pad the first n dimensions of an array with zeros
NB. (increasing all dimensions by 1 less than the kernel size)
pad=: adverb define
  adj1=: <.m%2
  adj2=: m-1
  (-@(adj2 + ]) {. (adj1 + ]) {. [) (#m) {. $
)
kernel_filter=: adverb define
[: ,/"(-#$m) ($m) +/@(,/^:(_1+#$m))@:*&m;._3 ($m)pad
)</lang>
This code assumes that the leading dimensions of the array represent pixels and any trailing dimensions represent structure to be preserved (this is a fairly common approach and matches the J implementation at Basic bitmap storage).
Example use:
   NB. kernels borrowed from C and TCL implementations
   sharpen_kernel=: _1+10*4=i.3 3
   blur_kernel=: 3 3$%9
   emboss_kernel=: _2 _1 0,_1 1 1,:0 1 2
   sobel_emboss_kernel=: _1 _2 _1,0,:1 2 1
'blurred.ppm' writeppm~ blur_kernel kernel_filter readppm 'original.ppm'
Java
Code: <lang Java>import java.awt.image.*;
import java.io.File;
import java.io.IOException;
import javax.imageio.*;
public class ImageConvolution {
  public static class ArrayData {
    public final int[] dataArray;
    public final int width;
    public final int height;

    public ArrayData(int width, int height) {
      this(new int[width * height], width, height);
    }

    public ArrayData(int[] dataArray, int width, int height) {
      this.dataArray = dataArray;
      this.width = width;
      this.height = height;
    }

    public int get(int x, int y) {
      return dataArray[y * width + x];
    }

    public void set(int x, int y, int value) {
      dataArray[y * width + x] = value;
    }
  }

  private static int bound(int value, int endIndex) {
    if (value < 0)
      return 0;
    if (value < endIndex)
      return value;
    return endIndex - 1;
  }

  public static ArrayData convolute(ArrayData inputData, ArrayData kernel, int kernelDivisor) {
    int inputWidth = inputData.width;
    int inputHeight = inputData.height;
    int kernelWidth = kernel.width;
    int kernelHeight = kernel.height;
    if ((kernelWidth <= 0) || ((kernelWidth & 1) != 1))
      throw new IllegalArgumentException("Kernel must have odd width");
    if ((kernelHeight <= 0) || ((kernelHeight & 1) != 1))
      throw new IllegalArgumentException("Kernel must have odd height");
    int kernelWidthRadius = kernelWidth >>> 1;
    int kernelHeightRadius = kernelHeight >>> 1;

    ArrayData outputData = new ArrayData(inputWidth, inputHeight);
    for (int i = inputWidth - 1; i >= 0; i--) {
      for (int j = inputHeight - 1; j >= 0; j--) {
        double newValue = 0.0;
        for (int kw = kernelWidth - 1; kw >= 0; kw--)
          for (int kh = kernelHeight - 1; kh >= 0; kh--)
            newValue += kernel.get(kw, kh) * inputData.get(
                bound(i + kw - kernelWidthRadius, inputWidth),
                bound(j + kh - kernelHeightRadius, inputHeight));
        outputData.set(i, j, (int)Math.round(newValue / kernelDivisor));
      }
    }
    return outputData;
  }

  public static ArrayData[] getArrayDatasFromImage(String filename) throws IOException {
    BufferedImage inputImage = ImageIO.read(new File(filename));
    int width = inputImage.getWidth();
    int height = inputImage.getHeight();
    int[] rgbData = inputImage.getRGB(0, 0, width, height, null, 0, width);
    ArrayData reds = new ArrayData(width, height);
    ArrayData greens = new ArrayData(width, height);
    ArrayData blues = new ArrayData(width, height);
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        int rgbValue = rgbData[y * width + x];
        reds.set(x, y, (rgbValue >>> 16) & 0xFF);
        greens.set(x, y, (rgbValue >>> 8) & 0xFF);
        blues.set(x, y, rgbValue & 0xFF);
      }
    }
    return new ArrayData[] { reds, greens, blues };
  }

  public static void writeOutputImage(String filename, ArrayData[] redGreenBlue) throws IOException {
    ArrayData reds = redGreenBlue[0];
    ArrayData greens = redGreenBlue[1];
    ArrayData blues = redGreenBlue[2];
    BufferedImage outputImage = new BufferedImage(reds.width, reds.height, BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < reds.height; y++) {
      for (int x = 0; x < reds.width; x++) {
        int red = bound(reds.get(x, y), 256);
        int green = bound(greens.get(x, y), 256);
        int blue = bound(blues.get(x, y), 256);
        outputImage.setRGB(x, y, (red << 16) | (green << 8) | blue | -0x01000000);
      }
    }
    ImageIO.write(outputImage, "PNG", new File(filename));
    return;
  }

  public static void main(String[] args) throws IOException {
    int kernelWidth = Integer.parseInt(args[2]);
    int kernelHeight = Integer.parseInt(args[3]);
    int kernelDivisor = Integer.parseInt(args[4]);
    System.out.println("Kernel size: " + kernelWidth + "x" + kernelHeight + ", divisor=" + kernelDivisor);
    int y = 5;
    ArrayData kernel = new ArrayData(kernelWidth, kernelHeight);
    for (int i = 0; i < kernelHeight; i++) {
      System.out.print("[");
      for (int j = 0; j < kernelWidth; j++) {
        kernel.set(j, i, Integer.parseInt(args[y++]));
        System.out.print(" " + kernel.get(j, i) + " ");
      }
      System.out.println("]");
    }

    ArrayData[] dataArrays = getArrayDatasFromImage(args[0]);
    for (int i = 0; i < dataArrays.length; i++)
      dataArrays[i] = convolute(dataArrays[i], kernel, kernelDivisor);
    writeOutputImage(args[1], dataArrays);
    return;
  }
}</lang>

Example 5x5 Gaussian blur, using Pentagon.png from the Hough transform task:
java ImageConvolution pentagon.png JavaImageConvolution.png 5 5 273  1 4 7 4 1  4 16 26 16 4  7 26 41 26 7  4 16 26 16 4  1 4 7 4 1
Kernel size: 5x5, divisor=273
[ 1  4  7  4  1 ]
[ 4  16  26  16  4 ]
[ 7  26  41  26  7 ]
[ 4  16  26  16  4 ]
[ 1  4  7  4  1 ]
Liberty BASIC
In the following a 128x128 bmp file is loaded and its brightness values are read into an array.
We then convolve it with a 'sharpen' 3x3 matrix. Results are shown directly on screen.
NB: tasks like convolution would be best done by combining LB with ImageMagick, which is easily called from LB.
<lang lb>
dim result( 300, 300), image( 300, 300), mask( 100, 100)
w =128
h =128

nomainwin

WindowWidth  = 460
WindowHeight = 210

open "Convolution" for graphics_nsb_nf as #w

#w "trapclose [quit]"

#w "down ; fill darkblue"

hw = hwnd( #w)
calldll #user32, "GetDC", hw as ulong, hdc as ulong

loadbmp "img", "alpha25.bmp"    '   128x128 pixels
#w "drawbmp img 20, 20"

#w "up ; color white ; goto 292 20 ; down ; box 420 148"
#w "up ; goto 180 60 ; down ; backcolor darkblue ; color cyan"
#w "\"; "Convolved with"

for y =0 to 127     '   fill in the input matrix
    for x =0 to 127
        xx =x + 20
        yy =y + 20
        CallDLL #gdi32, "GetPixel", hdc as uLong, xx as long, yy as long, pixcol as ulong
        call getRGB pixcol, b, g, r
        image( x, y) =b
        '#w "color "; image( x, y); " 0 "; 255 -image( x, y)
        '#w "set "; x + 20; " "; y +20 +140
    next x
next y

#w "flush"
print " Input matrix filled."

#w "size 8"
for y =0 to 2       '   fill in the mask matrix
    for x =0 to 2
        read mask
        mask( x, y) =mask
        if mask = ( 0 -1) then #w "color yellow" else #w "color red"
        #w "set "; 8 *x +200; " "; 8 *y +80
    next x
next y

data -1,-1,-1,-1,9,-1,-1,-1,-1

#w "flush"
print " Mask matrix filled."

#w "size 1"
mxx =0: mnn =0

for x =0 to 127 -2  '   since any further overlaps image edge
    for y =0 to 127 -2
        result( x, y) =0
        for kx =0 to 2
            for ky =0 to 2
                result( x, y) =result( x, y) +image( x +kx, y +ky) *mask( kx, ky)
            next ky
            if mxx <result( x, y) then mxx =result( x, y)
            if mnn >result( x, y) then mnn =result( x, y)
        next kx
        scan
    next y
next x

range =mxx -mnn
for x =0 to 127 -2
    for y =0 to 127 -2
        c =int( 255 *( result( x, y) -mnn) /range)
        '#w "color "; c; " "; c; " "; c
        if c >128 then #w "color white" else #w "color black"
        #w "set "; x +292 +1; " "; y +20 +1
        scan
    next y
next x

#w "flush"

wait

sub getRGB pixcol, byref r, byref g, byref b
    b = int( pixcol / (256 *256))
    g = int( ( pixcol - b *256 *256) / 256)
    r = int( pixcol - b *256 *256 - g *256)
end sub

[quit]
close #w
CallDLL #user32, "ReleaseDC", hw as ulong, hdc as ulong
end
</lang>
Screenview is available at [[1]]
Mathematica
Most image processing functions were introduced in Mathematica 7. <lang mathematica>img = Import[NotebookDirectory[] <> "Lenna50.jpg"];
kernel = {{0, -1, 0}, {-1, 4, -1}, {0, -1, 0}};
ImageConvolve[img, kernel]
ImageConvolve[img, GaussianMatrix[35]]
ImageConvolve[img, BoxMatrix[1]]</lang>
OCaml
<lang ocaml>let get_rgb img x y =
  let _, r_channel,_,_ = img in
  let width = Bigarray.Array2.dim1 r_channel
  and height = Bigarray.Array2.dim2 r_channel in
  if (x < 0) || (x >= width) then (0,0,0)
  else if (y < 0) || (y >= height) then (0,0,0)
  else  (* feed borders with black *)
    get_pixel img x y
let convolve_get_value img kernel divisor offset = fun x y ->
let sum_r = ref 0.0 and sum_g = ref 0.0 and sum_b = ref 0.0 in
  for i = -1 to 1 do
    for j = -1 to 1 do
      let r, g, b = get_rgb img (x+i) (y+j) in
      sum_r := !sum_r +. kernel.(j+1).(i+1) *. (float r);
      sum_g := !sum_g +. kernel.(j+1).(i+1) *. (float g);
      sum_b := !sum_b +. kernel.(j+1).(i+1) *. (float b);
    done;
  done;
  ( !sum_r /. divisor +. offset,
    !sum_g /. divisor +. offset,
    !sum_b /. divisor +. offset )
let color_to_int (r,g,b) =
(truncate r, truncate g, truncate b)
let bounded (r,g,b) =
((max 0 (min r 255)), (max 0 (min g 255)), (max 0 (min b 255)))
let convolve_value ~img ~kernel ~divisor ~offset =
  let _, r_channel,_,_ = img in
  let width = Bigarray.Array2.dim1 r_channel
  and height = Bigarray.Array2.dim2 r_channel in
let res = new_img ~width ~height in
let conv = convolve_get_value img kernel divisor offset in
  for y = 0 to pred height do
    for x = 0 to pred width do
      let color = conv x y in
      let color = color_to_int color in
      put_pixel res (bounded color) x y;
    done;
  done;
  (res)</lang>
<lang ocaml>let emboss img =
  let kernel = [| [| -2.; -1.; 0. |];
                  [| -1.;  1.; 1. |];
                  [|  0.;  1.; 2. |]; |] in
  convolve_value ~img ~kernel ~divisor:1.0 ~offset:0.0;
let sharpen img =
  let kernel = [| [| -1.; -1.; -1. |];
                  [| -1.;  9.; -1. |];
                  [| -1.; -1.; -1. |]; |] in
  convolve_value ~img ~kernel ~divisor:1.0 ~offset:0.0;
let sobel_emboss img =
  let kernel = [| [| -1.; -2.; -1. |];
                  [|  0.;  0.;  0. |];
                  [|  1.;  2.;  1. |]; |] in
  convolve_value ~img ~kernel ~divisor:1.0 ~offset:0.5;
let box_blur img =
  let kernel = [| [| 1.; 1.; 1. |];
                  [| 1.; 1.; 1. |];
                  [| 1.; 1.; 1. |]; |] in
  convolve_value ~img ~kernel ~divisor:9.0 ~offset:0.0;
</lang>
Octave
Uses the Image package.
<lang octave>function [r, g, b] = rgbconv2(a, c)
  r = im2uint8(mat2gray(conv2(a(:,:,1), c)));
  g = im2uint8(mat2gray(conv2(a(:,:,2), c)));
  b = im2uint8(mat2gray(conv2(a(:,:,3), c)));
endfunction
im = jpgread("Lenna100.jpg");

emboss  = [-2, -1, 0; -1, 1, 1; 0, 1, 2];
sobel   = [-1., -2., -1.; 0., 0., 0.; 1., 2., 1.];
sharpen = [-1.0, -1.0, -1.0; -1.0, 9.0, -1.0; -1.0, -1.0, -1.0];

[r, g, b] = rgbconv2(im, emboss);
jpgwrite("LennaEmboss.jpg", r, g, b, 100);
[r, g, b] = rgbconv2(im, sobel);
jpgwrite("LennaSobel.jpg", r, g, b, 100);
[r, g, b] = rgbconv2(im, sharpen);
jpgwrite("LennaSharpen.jpg", r, g, b, 100);</lang>
PicoLisp
<lang PicoLisp>(scl 3)
(de ppmConvolution (Ppm Kernel)
   (let (Len (length (car Kernel))  Radius (/ Len 2))
      (make
         (chain (head Radius Ppm))
         (for (Y Ppm T (cdr Y))
            (NIL (nth Y Len)
               (chain (tail Radius Y)) )
            (link
               (make
                  (chain (head Radius (get Y (inc Radius))))
                  (for (X (head Len Y) T)
                     (NIL (nth X 1 Len)
                        (chain (tail Radius (get X (inc Radius)))) )
                     (link
                        (make
                           (for C 3
                              (let Val 0
                                 (for K Len
                                    (for L Len
                                       (inc 'Val
                                          (* (get X K L C) (get Kernel K L)) ) ) )
                                 (link (min 255 (max 0 (*/ Val 1.0)))) ) ) ) )
                     (map pop X) ) ) ) ) ) ) )</lang>
Test using 'ppmRead' from Bitmap/Read a PPM file#PicoLisp and 'ppmWrite' from Bitmap/Write a PPM file#PicoLisp:
# Sharpen
(ppmWrite
   (ppmConvolution
      (ppmRead "Lenna100.ppm")
      '((-1.0 -1.0 -1.0) (-1.0 +9.0 -1.0) (-1.0 -1.0 -1.0)) )
   "a.ppm" )

# Blur
(ppmWrite
   (ppmConvolution
      (ppmRead "Lenna100.ppm")
      '((0.1 0.1 0.1) (0.1 0.1 0.1) (0.1 0.1 0.1)) )
   "b.ppm" )
Racket
This example uses typed/racket, since that gives access to inline-build-flomap, which delivers quite a performance boost over build-flomap.
(Input image: 271px-John_Constable_002.jpg; output: convolve-etch-3x3.png)
<lang racket>#lang typed/racket
(require images/flomap racket/flonum)
(provide flomap-convolve)
(: perfect-square? (Nonnegative-Fixnum -> (U Nonnegative-Fixnum #f)))
(define (perfect-square? n)
  (define rt-n (integer-sqrt n))
  (and (= n (sqr rt-n)) rt-n))

(: flomap-convolve (flomap FlVector -> flomap))
(define (flomap-convolve F K)
  (unless (flomap? F) (error "arg1 not a flowmap"))
  (unless (flvector? K) (error "arg2 not a flvector"))
  (define R (perfect-square? (flvector-length K)))
  (cond
    [(not (and R (odd? R))) (error "K is not odd-sided square")]
    [else
     (define R/2 (quotient R 2))
     (define R/-2 (quotient R -2))
     (define-values (sz-w sz-h) (flomap-size F))
     (define-syntax-rule (convolution c x y i)
       (if (= 0 c)
           (flomap-ref F c x y) ; c=3 is alpha channel
           (for*/fold: : Flonum ((acc : Flonum 0.))
             ((k (in-range 0 (add1 R/2)))
              (l (in-range 0 (add1 R/2)))
              (kl (in-value (+ (* k R) l)))
              (kx (in-value (+ x k R/-2)))
              (ly (in-value (+ y l R/-2)))
              #:when (< 0 kx (sub1 sz-w))
              #:when (< 0 ly (sub1 sz-h)))
             (+ acc (* (flvector-ref K kl) (flomap-ref F c kx ly))))))
     (inline-build-flomap 4 sz-w sz-h convolution)]))

(module* test racket
  (require racket/draw images/flomap racket/flonum
           (only-in 2htdp/image save-image))
  (require (submod ".."))
  (define flmp (bitmap->flomap (read-bitmap "jpg/271px-John_Constable_002.jpg")))
  (save-image (flomap->bitmap (flomap-convolve flmp (flvector 1.)))
              "out/convolve-unit-1x1.png")
  (save-image (flomap->bitmap (flomap-convolve flmp (flvector 0. 0. 0. 0. 1. 0. 0. 0. 0.)))
              "out/convolve-unit-3x3.png")
  (save-image (flomap->bitmap (flomap-convolve flmp (flvector -1. -1. -1. -1. 4. -1. -1. -1. -1.)))
              "out/convolve-etch-3x3.png"))</lang>
Ruby
<lang ruby>class Pixmap
  # Apply a convolution kernel to a whole image
  def convolute(kernel)
    newimg = Pixmap.new(@width, @height)
    pb = ProgressBar.new(@width) if $DEBUG
    @width.times do |x|
      @height.times do |y|
        apply_kernel(x, y, kernel, newimg)
      end
      pb.update(x) if $DEBUG
    end
    pb.close if $DEBUG
    newimg
  end

  # Applies a convolution kernel to produce a single pixel in the destination
  def apply_kernel(x, y, kernel, newimg)
    x0 = x==0 ? 0 : x-1
    y0 = y==0 ? 0 : y-1
    x1 = x
    y1 = y
    x2 = x+1==@width  ? x : x+1
    y2 = y+1==@height ? y : y+1
    r = g = b = 0.0
    [x0, x1, x2].zip(kernel).each do |xx, kcol|
      [y0, y1, y2].zip(kcol).each do |yy, k|
        r += k * self[xx,yy].r
        g += k * self[xx,yy].g
        b += k * self[xx,yy].b
      end
    end
    newimg[x,y] = RGBColour.new(luma(r), luma(g), luma(b))
  end

  # Function for clamping values to those that we can use with colors
  def luma(value)
    if value < 0
      0
    elsif value > 255
      255
    else
      value
    end
  end
end
# Demonstration code using the teapot image from Tk's widget demo
teapot = Pixmap.open('teapot.ppm')
[ ['Emboss',  [[-2.0, -1.0, 0.0], [-1.0, 1.0, 1.0], [0.0, 1.0, 2.0]]],
  ['Sharpen', [[-1.0, -1.0, -1.0], [-1.0, 9.0, -1.0], [-1.0, -1.0, -1.0]]],
  ['Blur',    [[0.1111,0.1111,0.1111],[0.1111,0.1111,0.1111],[0.1111,0.1111,0.1111]]],
].each do |label, kernel|
  savefile = 'teapot_' + label.downcase + '.ppm'
  teapot.convolute(kernel).save(savefile)
end</lang>
Tcl
<lang tcl>package require Tk
# Function for clamping values to those that we can use with colors
proc tcl::mathfunc::luma channel {
    set channel [expr {round($channel)}]
    if {$channel < 0} {
return 0
} elseif {$channel > 255} {
return 255
} else {
return $channel
}
}
# Applies a convolution kernel to produce a single pixel in the destination
proc applyKernel {srcImage x y -- kernel -> dstImage} {
    set x0 [expr {$x==0 ? 0 : $x-1}]
    set y0 [expr {$y==0 ? 0 : $y-1}]
    set x1 $x
    set y1 $y
    set x2 [expr {$x+1==[image width $srcImage]  ? $x : $x+1}]
    set y2 [expr {$y+1==[image height $srcImage] ? $y : $y+1}]

    set r [set g [set b 0.0]]
    foreach X [list $x0 $x1 $x2] kcol $kernel {
        foreach Y [list $y0 $y1 $y2] k $kcol {
            lassign [$srcImage get $X $Y] rPix gPix bPix
            set r [expr {$r + $k * $rPix}]
            set g [expr {$g + $k * $gPix}]
            set b [expr {$b + $k * $bPix}]
        }
    }

    $dstImage put [format "#%02x%02x%02x" \
            [expr {luma($r)}] [expr {luma($g)}] [expr {luma($b)}]] \
        -to $x $y
}
# Apply a convolution kernel to a whole image
proc convolve {srcImage kernel {dstImage ""}} {
    if {$dstImage eq ""} {
        set dstImage [image create photo]
    }
    set w [image width $srcImage]
    set h [image height $srcImage]
    for {set x 0} {$x < $w} {incr x} {
        for {set y 0} {$y < $h} {incr y} {
            applyKernel $srcImage $x $y -- $kernel -> $dstImage
        }
    }
    return $dstImage
}
# Demonstration code using the teapot image from Tk's widget demo
image create photo teapot -file $tk_library/demos/images/teapot.ppm
pack [labelframe .src -text Source] -side left
pack [label .src.l -image teapot]
foreach {label kernel} {
Emboss {
{-2. -1. 0.} {-1. 1. 1.} { 0. 1. 2.}
} Sharpen {
{-1. -1. -1} {-1. 9. -1} {-1. -1. -1}
} Blur {
{.1111 .1111 .1111} {.1111 .1111 .1111} {.1111 .1111 .1111}
}
} {
    set name [string tolower $label]
    update
    pack [labelframe .$name -text $label] -side left
    pack [label .$name.l -image [convolve teapot $kernel]]
}</lang>