higher dimensional deconvolution
I got interested in higher dimensional deconvolution when someone contacted me about using it as a way of looking for trading indicators in financial time series. I set this task as an example of something I speculate can be done well with functional and array processing languages, but only with difficulty otherwise, which I hope someone will weigh in to confirm or refute. In case anyone wants to know, consistent [test data] were generated partly by this higher dimensional convolution function,
<lang Ursala>conv = +^|(~&x+,*DrlDSNiCK9xxSNiCK9K7iFS+ *)=>times+ **+ *K7|\x+ iota; * ! plus:-0</lang>
evaluated as (conv d)(h,f) with dimension d > 0 and conforming operands h and f. (This function essentially subsumes the Image convolution task as a special case with d = 2 and |h| = 3.) I suggest a development methodology based on warming up with Deconvolution/1D, then hand coding the solutions for the next few dimensions, and then looking for the pattern.
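For anyone who finds the Ursala opaque, the operation being inverted can be sketched in Python. This is my reading of full d-dimensional convolution on nested lists, not a transliteration of the code above; the names `convolve` and `add` are made up for illustration.

```python
def add(a, b, d):
    """Elementwise sum of two conforming d-dimensional nested lists."""
    if d == 0:
        return a + b
    return [add(x, y, d - 1) for x, y in zip(a, b)]

def convolve(h, f, d):
    """Full d-dimensional convolution of nested lists h and f.
    The leading axis of the result has length len(h) + len(f) - 1."""
    if d == 0:
        return h * f                      # scalars: plain multiplication
    out = [None] * (len(h) + len(f) - 1)
    for i, hi in enumerate(h):
        for j, fj in enumerate(f):
            term = convolve(hi, fj, d - 1)
            out[i + j] = term if out[i + j] is None else add(out[i + j], term, d - 1)
    return out
```

For example, convolve([1, 1], [1, 2, 3], 1) gives [1, 3, 5, 3], and with d = 2 the same recursion convolves matrices, each entry of the outer loop being a one dimensional convolution of rows.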
I am new here, but I think the definition of convolution is wrong; it should be in accordance with [convolution].
--R.E. Boss 08:45, 3 March 2010
My bad. It's fixed now.
How do you help someone who knows how to program, but not how to de/convolve? I would like to attempt a solution, but since I don't know Ursala, the example code given is of no help, and the maths only helps those who already know, rather than being a means of communicating to those outside the field. In Averages/Pythagorean means I was careful to add textual descriptions of the maths to help those people who know how to program but whose eyes might start to glaze over when confronted with the equations. In fact, the second part of the task, "Show that the ...", was initially below the block of equations, and several implementers missed it completely in their haste to gloss over the equations.
It can't hurt to try. Here is an informal description of an algorithm for it, with preference given to clarity over brevity. I'll record it here rather than in the task specification, because it is neither the only way to solve it nor necessarily the best.
A band function takes a pair of lists (h, g) representing the
spaces you want to deconvolve. The length of h is |h| and the length
of g is |g|, with |g| &ge; |h|. The band function
returns a matrix with |g| rows and |f| columns, where the length of the
answer, |f|, is given by
|f| = |g| - |h| + 1. Assuming all indices are numbered from 0, the entry in the
i-th row and the j-th column of this matrix is h[i-j] if
0 &le; i-j < |h|, but 0 otherwise. How you write the code to build it is up
to you. It doesn't have to be done with list combinators if the
language is better at handling arrays, but when it's done, the matrix
should look like the coefficient matrix for the system of equations
shown in the Deconvolution/1D task specification.
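As a concreteness check, here is one way such a band function might look in Python. This is a sketch following the definitions above, not part of the task:

```python
def band(h, g):
    """Coefficient matrix for the 1-dimensional system g = h * f.

    The matrix has len(g) rows and len(g) - len(h) + 1 columns; the
    (i, j) entry is h[i - j] when 0 <= i - j < len(h), and 0 otherwise.
    Only the length of g is used, as in the description above."""
    nf = len(g) - len(h) + 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0 for j in range(nf)]
            for i in range(len(g))]
```

For example, band([1, 2], [0, 0, 0, 0]) yields the banded matrix [[1, 0, 0], [2, 1, 0], [0, 2, 1], [0, 0, 2]], which has the same shape as the coefficient matrix in the Deconvolution/1D task.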
For the one dimensional case, the rest is easy. Plug this matrix and
the vector into a linear equation solver if the language has one, or
the function from the Reduced row echelon form task otherwise, and
you're done. The only minor complication is that you have to delete
some of the equations if the solver can't cope with having more
equations than unknowns. (In practice, it's better for noise reduction
if it can.) There are |g| equations but only |f| unknowns, leaving
|h| - 1 equations to spare. You can delete (|h| - 1)/2 of them from the beginning
and the same number from the end (or one extra from somewhere if |h| - 1 is an
odd number). If the solver precludes overdetermined systems
and the language also doesn't let you delete rows from matrices, you might
have to write the code in the
band function to build it that way in
the first place.
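Putting the one dimensional case together, a hedged sketch in Python follows. The band construction repeats the description above, and solve_square is a stand-in for whatever library solver the language offers, done here with exact rationals so the result isn't at the mercy of rounding; all names are made up.

```python
from fractions import Fraction

def band(h, g):
    """Banded coefficient matrix for g = h * f, as described above."""
    nf = len(g) - len(h) + 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0 for j in range(nf)]
            for i in range(len(g))]

def solve_square(a, b):
    """Gauss-Jordan elimination with partial pivoting over rationals.
    Assumes a nonsingular square system."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                factor = m[r][col] / m[col][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    return [m[i][-1] / m[i][i] for i in range(n)]

def deconv1d(h, g):
    """Discard the |h| - 1 surplus equations and solve the square system."""
    a = band(h, g)
    surplus = len(h) - 1
    lo = surplus // 2                 # equations dropped from the beginning
    hi = surplus - lo                 # and from the end
    return solve_square(a[lo:len(a) - hi], g[lo:len(g) - hi])
```

For example, deconv1d([1, 2], [3, 10, 13, 10]) recovers [3, 4, 5], since convolving [1, 2] with [3, 4, 5] gives [3, 10, 13, 10].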
For the two dimensional case, each member of h and g is itself a
vector rather than a scalar. Proceed using the same algorithm as in the
band function above to construct a matrix of vectors following
the same pattern, with the entry in the i-th row and j-th column being either the
vector h[i-j] from h, or a zero vector of conforming length.
I'm not saying it can't be done, but I'm not sure how pruning the matrix generalizes to higher dimensions, so the rest of this description assumes the solver copes with overdetermined systems. Such a solver is readily available in the [Lapack] library, which is callable from many languages.
Next, pair up each row of the matrix of vectors with the corresponding member of
g. Since it is a matrix of vectors, each row of it can be considered a matrix
of scalars on its own. For each i ranging from 0 to |g| - 1, the pair of the i-th member of g and the i-th row will consist of a vector v and a matrix m. Pair up a copy of v with each row of m and apply the 1 dimensional
band function to each member of the list of all such pairs. This step will yield a list of matrices (of scalars) each with |v| rows. Flatten this list of matrices into a single matrix with |v| rows by concatenating them row-wise (i.e., horizontally). Having done so for all pairs, flatten the resulting list of matrices into one single matrix having |g|*|v| rows by concatenating them column-wise (i.e., vertically). This matrix should then be fed to the linear equation solver along with a flattened copy of g.
The result returned by the linear equation solver will be a list of scalars, which must be unflattened into a list of vectors in the two dimensional case. Do so using whatever algorithm you prefer, provided that the length of each vector in the result is the length of a member of g minus the length of a member of h, plus 1. (If the length of the result returned by the linear equation solver is not a multiple of this number, you've done it wrong.)
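The two dimensional walk-through above can be sketched as follows, again in Python with made-up helper names. The least squares step via the normal equations stands in for the overdetermined solver one would get from Lapack, and exact rationals are used so the test data come back exactly.

```python
from fractions import Fraction

def band(h, g):
    """1-dimensional banded coefficient matrix, as described earlier."""
    nf = len(g) - len(h) + 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0 for j in range(nf)]
            for i in range(len(g))]

def solve_square(a, b):
    """Gauss-Jordan elimination with partial pivoting over rationals."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][-1] / m[i][i] for i in range(n)]

def lstsq(a, b):
    """Overdetermined solve via the normal equations A^T A x = A^T b."""
    at = list(zip(*a))
    ata = [[sum(Fraction(x) * y for x, y in zip(r, c)) for c in at] for r in at]
    atb = [sum(Fraction(x) * y for x, y in zip(r, b)) for r in at]
    return solve_square(ata, atb)

def deconv2d(h, g):
    """2-dimensional deconvolution by the band-of-bands construction."""
    nf1 = len(g) - len(h) + 1          # rows in the answer f
    nf2 = len(g[0]) - len(h[0]) + 1    # columns in the answer f
    zero = [0] * len(h[0])
    big = []
    for i, gi in enumerate(g):
        # i-th row of the matrix of vectors: rows of h, or zero vectors
        cells = [h[i - j] if 0 <= i - j < len(h) else zero
                 for j in range(nf1)]
        blocks = [band(c, gi) for c in cells]    # one 1-D band per cell
        for r in range(len(gi)):                 # horizontal concatenation
            big.append([x for blk in blocks for x in blk[r]])
    flat_f = lstsq(big, [x for gi in g for x in gi])
    # unflatten the solution into rows of length nf2
    return [flat_f[k:k + nf2] for k in range(0, len(flat_f), nf2)]
```

For example, convolving h = [[1, 0], [0, 1]] with f = [[1, 2], [3, 4]] gives g = [[1, 2, 0], [3, 5, 2], [0, 3, 4]], and deconv2d(h, g) recovers f.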
For three dimensions, h and g will be lists of matrices. Begin as above, but in this case populating the zero entries of the matrix with conforming zero matrices rather than vectors or scalars. When you get to the point of operating on each pair, it turns out to consist not of a vector and a matrix of scalars as in the two dimensional case, but of a matrix (the i-th member of g) and a row of matrices. At this point, pair up a copy of the i-th member of g with each matrix in that row, and treat each of the pairs as an instance of the two dimensional problem. Construct the input matrix to the linear equation solver corresponding to each of these subproblems as explained above, but don't solve it. Instead, flatten them into a single matrix by concatenating them horizontally. Repeat for each i, flatten the results vertically as explained above, and feed the resulting data to the solver. (g will have to be flattened twice for the solver in the three dimensional case.) Take the solution and unflatten it first into sublists whose length is the innermost dimension of the answer, and then unflatten the intermediate result further into sublists whose length is the middle dimension of the answer.
A four dimensional problem instance is to three as three is to two, and finishes with three unflattenings. Higher dimensions are analogous. Whether the language has suitable abstraction mechanisms to express a general form for this algorithm is part of what the task is intended to establish.
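As to whether a general form is expressible: in Python at least, the whole family of coefficient matrices can be generated by one recursion, each level being a band of (d-1)-dimensional blocks. This is only a sketch of the matrix construction and the flattening bookkeeping, with the overdetermined solver omitted and all names made up.

```python
def band(h, g):
    """1-dimensional banded coefficient matrix, the base case."""
    nf = len(g) - len(h) + 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0 for j in range(nf)]
            for i in range(len(g))]

def coef_matrix(h, g, d):
    """Scalar coefficient matrix for the d-dimensional system: an outer
    band whose entries are (d-1)-dimensional coefficient matrices."""
    if d == 1:
        return band(h, g)
    nf = len(g) - len(h) + 1
    rows = []
    for i, gi in enumerate(g):
        blocks = [coef_matrix(h[i - j], gi, d - 1) if 0 <= i - j < len(h)
                  else None for j in range(nf)]
        br = next(len(b) for b in blocks if b is not None)
        bc = next(len(b[0]) for b in blocks if b is not None)
        blocks = [b if b is not None else [[0] * bc for _ in range(br)]
                  for b in blocks]
        for r in range(br):                      # horizontal concatenation
            rows.append([x for b in blocks for x in b[r]])
    return rows

def flatten(x, d):
    """Flatten a d-dimensional nested list to a flat list of scalars."""
    return x if d == 1 else [s for xi in x for s in flatten(xi, d - 1)]

def unflatten(xs, sizes):
    """Undo the flattening; sizes lists the non-leading answer dimensions
    from outermost to innermost, and is applied innermost first."""
    for size in reversed(sizes):
        xs = [xs[k:k + size] for k in range(0, len(xs), size)]
    return xs
```

With these, every dimension reduces to one call of the overdetermined solver on coef_matrix(h, g, d) and flatten(g, d), followed by d - 1 unflattenings of the solution.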