vspline 1.1.0
Generic C++11 Code for Uniform B-Splines
metafilter3.cc File Reference

implementing a locally adapted filter More...

#include <iostream>
#include <vspline/vspline.h>
#include "vspline/opt/vector_promote.h"
#include "vspline/opt/xel_of_vector.h"
#include <vigra/stdimage.hxx>
#include <vigra/imageinfo.hxx>
#include <vigra/impex.hxx>
#include <vigra/quaternion.hxx>


Classes

struct  image_base_type
 
struct  image_type< vsize >
 
struct  meta_filter< I, O, S, EV >
 

Macros

#define XEL_TYPE   vigra::TinyVector
 
#define VECTOR_TYPE   vspline::simd_type
 
#define VECTOR_TYPE   vspline::vc_simd_type
 

Typedefs

typedef vigra::RGBValue< float, 0, 1, 2 > pixel_type
 
typedef vigra::TinyVector< float, 2 > coordinate_type
 
typedef vigra::TinyVector< float, 3 > ray_type
 
typedef vspline::bspline< pixel_type, 2 > spline_type
 
typedef vigra::MultiArray< 2, pixel_type > target_type
 
typedef vigra::TinyVector< float, 3 > coefficient_type
 
typedef vigra::MultiArray< 2, coefficient_type > kernel_type
 

Functions

template<typename dtype >
vigra::Quaternion< dtype > get_rotation_q (dtype yaw, dtype pitch)
 
template<typename dtype , typename qtype >
vigra::TinyVector< dtype, 3 > rotate_q (const vigra::TinyVector< dtype, 3 > &v, vigra::Quaternion< qtype > q)
 apply the rotation codified in a quaternion to a 3D point More...
 
void build_kernel (const vigra::MultiArrayView< 2, float > &kernel_2d, const image_base_type &img, kernel_type &meta_kernel)
 
void scale_kernel (kernel_type &k, double sx, double sy)
 
int main (int argc, char *argv[])
 

Detailed Description

implementing a locally adapted filter

taking the method introduced in metafilter.cc one step further, this file uses a filter which varies with the locus of its application. Since the loci of the pickup points, relative to the filter's center, are decoupled from the filter weights, we can 'reshape' the filter by manipulating them. This program adapts the filter so that it is applied to a perspective-corrected view of the image - or, to express it differently (and this is what happens technically) - the filter is reshaped with the locus of its application. So the filter is applied to 'what was seen' rather than 'what the image holds'.

This is a subtle difference which is not easy to spot - you can provoke a clearly visible result by specifying a large hfov, and by applying the program to images with sharp contrasts and single off-coloured pixels; with ordinary photographs and 'normal' viewing angles you'll notice a blur increasing towards the edges of the image. In a wide-angle shot, small circular objects near the edges (especially the corners) of the image appear like ellipses. Filtering the image with a normal convolution with a symmetric filter (like the binomial we're using here) will result in the ellipses being surrounded by a blurred halo which has the same width everywhere. Using the filter implemented here, we see an image of a blurred circular object: the blurred halo is wider in the radial direction. And this is precisely what 'should' be seen: filtering the image with a static filter is a simplification which produces results that look reasonably convincing given a reasonable field of view, but to 'properly' model some change in viewing conditions (like haze in the atmosphere) we need to model what was seen.

While this distinction may look like hair-splitting when it comes to photography, it may be more convincing when you consider feature detection. If you have a filter detecting circular patterns, the filter's response to the elliptical shapes occurring in a wide-angle shot near the margins will be sub-optimal: the feature detector will not respond maximally because, after all, its input is an ellipse and not a circle. But if you use the technique implemented in this program, the detector will adapt to the place it 'looks at' and detect representations of circular shapes in the image with proper full response. In fact, such a detector will react with lessened response if the image, near the border, shows circular structures, because these clearly can't have originated from circular objects in the view.

This program works with rectilinear input, but the implications are even greater for, say, full spherical panoramas. In such images, there is intense distortion 'near the poles', and using ordinary filters on such images does not produce truly correct effects. With filters which adapt to the locus of their application, such images can be filtered adequately. A word of warning is in order here: you still have to obey the sampling theorem and make sure that the pickup points are not spaced further apart than half the minimal wavelength captured in the image (expressed as an angle in spherical coordinates). Otherwise you will get aliasing, which may produce visible artifacts. If in doubt, use a larger kernel and scale it down - 'kernel' size can be adapted freely with the technique proposed here.
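To make the mechanism concrete, here is a minimal, self-contained sketch in plain C++ of what 'reshaping the filter with the locus of its application' means for rectilinear images. All names are hypothetical (this is not the code in this file), and it assumes pickup points stored as (dx, dy, weight) triplets, as coefficient_type suggests:

#include <cmath>
#include <vector>

// one pickup point of the 'meta kernel': offset from the filter's
// center on the projection plane, plus the filter weight
struct pickup { double dx , dy , w ; } ;

// rotate a 3D vector, applying pitch (about the x axis) first,
// then yaw (about the y axis) - a plain stand-in for the
// quaternion-based rotation used in this file
void rotate ( double v[3] , double yaw , double pitch )
{
  double y =   v[1] * std::cos ( pitch ) + v[2] * std::sin ( pitch ) ;
  double z = - v[1] * std::sin ( pitch ) + v[2] * std::cos ( pitch ) ;
  double x =   v[0] * std::cos ( yaw )   + z * std::sin ( yaw ) ;
  z        = - v[0] * std::sin ( yaw )   + z * std::cos ( yaw ) ;
  v[0] = x ; v[1] = y ; v[2] = z ;
}

// reshape the kernel for target pixel (px,py): treat each pickup
// offset as a ray near the optical axis, rotate it onto the viewing
// ray through (px,py) and project it back to image coordinates.
// (cx,cy) is the image center; f is the focal length in pixel units,
// f = ( width / 2 ) / tan ( hfov / 2 ) for a rectilinear image.
std::vector < pickup > reshape ( const std::vector < pickup > & k ,
                                 double px , double py ,
                                 double cx , double cy , double f )
{
  double yaw   = std::atan2 ( px - cx , f ) ;
  double pitch = std::atan2 ( py - cy , std::hypot ( px - cx , f ) ) ;
  std::vector < pickup > out ;
  for ( auto const & p : k )
  {
    double v[3] = { p.dx , p.dy , f } ;  // pickup ray at the center
    rotate ( v , yaw , pitch ) ;         // move it to the target locus
    // back-project: the pickup point is now an absolute 2D coordinate,
    // to be evaluated with the b-spline; the weight is unchanged
    out.push_back ( { cx + v[0] / v[2] * f ,
                      cy + v[1] / v[2] * f ,
                      p.w } ) ;
  }
  return out ;
}

The filter result at (px,py) is then the weighted sum of b-spline evaluations at the reshaped pickup points, so the filter operates on 'what was seen'.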
To see the filter 'at work', process an image with a few isolated black dots on a white background and inspect the output, which shows a different result depending on the distance from the image's center, rather than a uniform impulse response everywhere. All of this sounds like a lot of CPU cycles to be gotten through, but in fact, due to vspline's use of SIMD and multi-threading, it only takes a fraction of a second for a full HD image.

In this variant of the program, we add a few 'nice-to-have' features which are easy to implement with the given tool set: we add the option to scale the filter - we can do this steplessly because the positions of the pickup points are decoupled from the grid. We also use a larger kernel (9x9) which we can use over a wider range of scales - if we scale it so that it 'covers' just about a single pixel in the center, it will be larger towards the image margins, the increase depending on the FOV. If we use it on scaled versions of the image, we can scale it so that it covers an equally sized patch (e.g. x arc seconds in diameter) on every scale. We could be tempted to use it unscaled on the image with the most detail, but we have to keep in mind that in the corners of the image this will put the pickup points further apart than one unit step, which will only produce correct results if excessively high frequencies are absent from the spectrum in these parts of the image.

This suggests that using metafilters can be useful for work with image pyramids: given two adjacent layers of the pyramid L and M, where M is a scaled-down version of L with a scaling factor of l (in 'conventional' pyramids, l = 1/2), we can choose a metakernel so that it will not be too 'scattered' when projected to the edges of L (so as to avoid aliasing) and scale the metakernel with l when we apply it to M. Then we expect similar filter response from both cases at corresponding loci - we have killed two birds with one stone: the filter response is independent of scale and reflects 'what is looked at' rather than its projected image. With the free scalability of the metafilter we can use other l than 0.5, which allows 'steeper' and 'shallower' pyramids (cf. spline pyramids). With the equivalence of applying two differently-scaled versions of a metafilter to differently-scaled versions of an image, we can also 'extrapolate' filters: rather than applying a metafilter with a large number of coefficients to a detailed image, we can apply a smaller metafilter to a scaled-down version of the image. (Note that this puts constraints on the filter; it should be akin to a gaussian.)
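Stepless scaling of the metafilter, as used in the pyramid scheme just described, is particularly simple, because only the pickup coordinates need to be touched while the weights stay put. Here is a minimal sketch, assuming coefficient_type holds (dx, dy, weight) and matching the scale_kernel signature documented below:

#include <vigra/multi_array.hxx>

typedef vigra::TinyVector < float , 3 > coefficient_type ;
typedef vigra::MultiArray < 2 , coefficient_type > kernel_type ;

// scale the pickup geometry; the weights (component 2) are unchanged
void scale_kernel ( kernel_type & k , double sx , double sy )
{
  for ( auto & c : k )
  {
    c[0] *= sx ;   // x offset of the pickup point
    c[1] *= sy ;   // y offset of the pickup point
  }
}

For the two-layer scenario above, one would filter L with the metakernel as built and call scale_kernel ( k , l , l ) before filtering M, expecting similar filter response at corresponding loci.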

Definition in file metafilter3.cc.

Macro Definition Documentation

◆ VECTOR_TYPE [1/2]

#define VECTOR_TYPE   vspline::simd_type

Definition at line 168 of file metafilter3.cc.

◆ VECTOR_TYPE [2/2]

#define VECTOR_TYPE   vspline::vc_simd_type

Definition at line 168 of file metafilter3.cc.

◆ XEL_TYPE

#define XEL_TYPE   vigra::TinyVector

Definition at line 159 of file metafilter3.cc.

Typedef Documentation

◆ coefficient_type

typedef vigra::TinyVector< float , 3 > coefficient_type

Definition at line 215 of file metafilter3.cc.

◆ coordinate_type

typedef vigra::TinyVector< float , 2 > coordinate_type

Definition at line 197 of file metafilter3.cc.

◆ kernel_type

typedef vigra::MultiArray< 2 , coefficient_type > kernel_type

Definition at line 219 of file metafilter3.cc.

◆ pixel_type

typedef vigra::RGBValue< float , 0 , 1 , 2 > pixel_type

Definition at line 193 of file metafilter3.cc.

◆ ray_type

typedef vigra::TinyVector< float , 3 > ray_type

Definition at line 201 of file metafilter3.cc.

◆ spline_type

typedef vspline::bspline< pixel_type, 2 > spline_type

Definition at line 205 of file metafilter3.cc.

◆ target_type

typedef vigra::MultiArray< 2 , pixel_type > target_type

Definition at line 209 of file metafilter3.cc.

Function Documentation

◆ build_kernel()

void build_kernel ( const vigra::MultiArrayView< 2, float > & kernel_2d,
                    const image_base_type & img,
                    kernel_type & meta_kernel )

Definition at line 540 of file metafilter3.cc.
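build_kernel turns a plain 2D convolution kernel into a 'meta kernel' of (offset, weight) records. The sketch below shows the gist under the assumption that coefficient_type holds (dx, dy, weight); the role the image plays in the real build_kernel (deriving the projection geometry) is left out here, and build_meta_kernel is a hypothetical name:

#include <vigra/multi_array.hxx>

typedef vigra::TinyVector < float , 3 > coefficient_type ;
typedef vigra::MultiArray < 2 , coefficient_type > kernel_type ;

// pair every weight in kernel_2d with the offset of its tap from
// the kernel's center, making the pickup geometry explicit data
// which can later be scaled and reshaped
void build_meta_kernel ( const vigra::MultiArrayView < 2 , float > & kernel_2d ,
                         kernel_type & meta_kernel )
{
  meta_kernel.reshape ( kernel_2d.shape() ) ;
  int cx = ( kernel_2d.shape(0) - 1 ) / 2 ;
  int cy = ( kernel_2d.shape(1) - 1 ) / 2 ;
  for ( int y = 0 ; y < kernel_2d.shape(1) ; y++ )
    for ( int x = 0 ; x < kernel_2d.shape(0) ; x++ )
      meta_kernel ( x , y ) = coefficient_type ( float ( x - cx ) ,
                                                 float ( y - cy ) ,
                                                 kernel_2d ( x , y ) ) ;
}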

◆ get_rotation_q()

template<typename dtype >
vigra::Quaternion< dtype > get_rotation_q ( dtype yaw, dtype pitch )

Definition at line 363 of file metafilter3.cc.
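A yaw/pitch rotation quaternion can be built by composing two axis rotations. This is a plain-arithmetic sketch of the construction, deliberately avoiding assumptions about vigra::Quaternion's interface; the composition order is a convention and may differ from the one used in this file:

#include <cmath>

struct quat { double w , x , y , z ; } ;

// hamilton product of two quaternions
quat mul ( quat a , quat b )
{
  return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z ,
           a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y ,
           a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x ,
           a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w } ;
}

// yaw rotates about the y axis, pitch about the x axis; with
// v' = q v q*, applying pitch first and yaw second means q = qy * qp
quat rotation_q ( double yaw , double pitch )
{
  quat qy = { std::cos ( yaw / 2 ) , 0 , std::sin ( yaw / 2 ) , 0 } ;
  quat qp = { std::cos ( pitch / 2 ) , std::sin ( pitch / 2 ) , 0 , 0 } ;
  return mul ( qy , qp ) ;
}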

◆ main()

int main ( int argc, char * argv[] )

Definition at line 578 of file metafilter3.cc.

◆ rotate_q()

template<typename dtype , typename qtype >
vigra::TinyVector< dtype, 3 > rotate_q ( const vigra::TinyVector< dtype, 3 > & v,
                                         vigra::Quaternion< qtype > q )

apply the rotation codified in a quaternion to a 3D point

Definition at line 391 of file metafilter3.cc.
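For a unit quaternion q = (w, u), the rotation of a point v can be computed without trigonometry via the identity v' = v + 2 u x ( u x v + w v ). A plain-arithmetic sketch of this identity (again avoiding assumptions about vigra::Quaternion's interface; rotate_q_plain is a hypothetical name):

#include <array>

// rotate v by the unit quaternion q = ( w , ( x , y , z ) )
std::array < double , 3 > rotate_q_plain ( std::array < double , 3 > v ,
                                           double w , double x ,
                                           double y , double z )
{
  // t = u x v + w * v
  double tx = y * v[2] - z * v[1] + w * v[0] ;
  double ty = z * v[0] - x * v[2] + w * v[1] ;
  double tz = x * v[1] - y * v[0] + w * v[2] ;
  // v' = v + 2 * ( u x t )
  return { v[0] + 2 * ( y * tz - z * ty ) ,
           v[1] + 2 * ( z * tx - x * tz ) ,
           v[2] + 2 * ( x * ty - y * tx ) } ;
}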

◆ scale_kernel()

void scale_kernel ( kernel_type & k, double sx, double sy )

Definition at line 572 of file metafilter3.cc.