vspline 1.1.0
Generic C++11 Code for Uniform B-Splines
metafilter3.cc
/************************************************************************/
/*                                                                      */
/*    vspline - a set of generic tools for creation and evaluation     */
/*              of uniform b-splines                                   */
/*                                                                      */
/*            Copyright 2015 - 2023 by Kay F. Jahnke                   */
/*                                                                      */
/*    Permission is hereby granted, free of charge, to any person      */
/*    obtaining a copy of this software and associated documentation   */
/*    files (the "Software"), to deal in the Software without          */
/*    restriction, including without limitation the rights to use,     */
/*    copy, modify, merge, publish, distribute, sublicense, and/or     */
/*    sell copies of the Software, and to permit persons to whom the   */
/*    Software is furnished to do so, subject to the following         */
/*    conditions:                                                      */
/*                                                                      */
/*    The above copyright notice and this permission notice shall be   */
/*    included in all copies or substantial portions of the            */
/*    Software.                                                        */
/*                                                                      */
/*    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND   */
/*    EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES  */
/*    OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND         */
/*    NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT      */
/*    HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,     */
/*    WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING     */
/*    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR    */
/*    OTHER DEALINGS IN THE SOFTWARE.                                  */
/*                                                                      */
/************************************************************************/

/// \file metafilter3.cc
///
/// \brief implementing a locally adapted filter
///
/// Taking the method introduced in metafilter.cc one step further,
/// this file uses a filter which varies with the locus of its
/// application. Since the loci of the pickup points, relative to the
/// filter's center, are decoupled from the filter weights, we can
/// 'reshape' the filter by manipulating them. This program adapts the
/// filter so that it is applied to a perspective-corrected view of
/// the image - or, to express it differently (and this is what happens
/// technically), the filter is reshaped with the locus of its application.
/// So the filter is applied to 'what was seen' rather than
/// 'what the image holds'. This is a subtle difference which is not
/// easy to spot - you can provoke a clearly visible result by specifying
/// a large hfov, and also by applying the program to images with sharp
/// contrasts and single off-coloured pixels; with ordinary photographs
/// and 'normal' viewing angles you'll notice a blur increasing towards
/// the edges of the image. In a wide-angle shot, small circular
/// objects near the edges (especially the corners) of the image appear
/// like ellipses. Filtering the image with a normal convolution with
/// a symmetric filter (like the binomial we're using here) will result
/// in the ellipses being surrounded by a blurred halo which has the same
/// width everywhere. Using the filter implemented here, we see an image
/// of a blurred circular object: the blurred halo is wider in the radial
/// direction. And this is precisely what 'should' be seen: filtering
/// the image with a static filter is a simplification which produces
/// results that look reasonably convincing given a reasonable field
/// of view, but to 'properly' model some change in viewing conditions
/// (like haze in the atmosphere) we need to model what was seen.
/// While this distinction may look like hair-splitting when it comes to
/// photography, it may be more convincing when you consider feature
/// detection. If you have a filter detecting circular patterns, the
/// filter's response to the elliptical shapes occurring in a wide-angle
/// shot near the margins will be sub-optimal: the feature detector will
/// not respond maximally because, after all, its input is an ellipse
/// and not a circle. But if you use the technique I implement in this
/// program, the detector will adapt to the place it 'looks at' and detect
/// representations of circular shapes in the image with proper full
/// response. In fact, such a detector will react with lessened response
/// if the image, near the border, shows circular structures, because
/// these clearly can't have originated from circular objects in the view.
/// This program works with rectilinear input, but the implications are
/// even greater for, say, full spherical panoramas. In such images, there
/// is intense distortion 'near the poles', and using ordinary filters on
/// such images does not produce truly correct effects. With filters which
/// adapt to the locus of their application, such images can be filtered
/// adequately.
/// A word of warning is in order here: you still have to obey the
/// sampling theorem and make sure that the pickup points are spaced no
/// further apart than half the minimal wavelength captured in the image
/// (expressed as an angle in spherical coordinates). Otherwise you will
/// get aliasing, which may produce visible artifacts. If in doubt, use
/// a larger kernel and scale it down. 'Kernel' size can be adapted
/// freely with the technique proposed here. To see the filter 'at work',
/// process an image with a few isolated black dots on a white background
/// and inspect the output, which shows a different result depending on
/// the distance from the image's center, rather than a uniform impulse
/// response everywhere.
/// All of this sounds like a lot of CPU cycles to be gotten through,
/// but in fact, due to vspline's use of SIMD and multi-threading, it
/// only takes a fraction of a second for a full HD image.
/// In this variant of the program, we add a few 'nice-to-have' features
/// which are easy to implement with the given tool set: we add the option
/// to scale the filter - we can do this steplessly because the positions
/// of the pickup points are decoupled from the grid. We also use a larger
/// kernel (9x9) which we can use over a wider range of scales - if we
/// scale it so that it 'covers' just about a single pixel in the center,
/// it will be larger towards the image margins, the increase depending on
/// the FOV. If we use it on scaled versions of the image, we can scale it
/// so that it covers an equally sized patch (e.g. x arc seconds in
/// diameter) on every scale. We could be tempted to use it unscaled on
/// the image with the most detail, but we have to keep in mind that in
/// the corners of the image, this will put the pickup points further
/// apart than one unit step, which will only produce correct results
/// if excessively high frequencies are absent from the spectrum in these
/// parts of the image.
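/// To put numbers to this: with an hfov of 90 degrees, the image plane
/// spans two unit radii, so on a 1920 pixel wide image a central pixel
/// subtends roughly 2.0 / 1920 radians, or about 3.6 arc minutes - unit
/// pickup spacing is then safe as long as the image holds no content
/// with wavelengths below two pixel steps.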
/// This suggests that using metafilters can be useful for work with
/// image pyramids: given two adjacent layers of the pyramid L and M,
/// where M is a scaled-down version of L with a scaling factor of
/// l (in 'conventional' pyramids, l = 1/2), we can choose a metakernel
/// so that it will not be too 'scattered' when projected to the edges
/// of L (so as to avoid aliasing) and scale the metakernel with l
/// when we apply it to M. Then we expect similar filter response from
/// both cases at corresponding loci - we have killed two birds with
/// one stone: the filter response is independent of scale and reflects
/// 'what is looked at' rather than its projected image. With the free
/// scalability of the metafilter, we can use other l than 0.5, which
/// allows 'steeper' and 'shallower' pyramids (cf. spline pyramids).
/// With the equivalence of applying two differently-scaled versions of
/// a metafilter to differently-scaled versions of an image, we can
/// also 'extrapolate' filters: rather than applying a metafilter with
/// a large number of coefficients to a detailed image, we can apply
/// a smaller metafilter to a scaled-down version of the image. (Note
/// that this puts constraints on the filter: it should be akin to a
/// Gaussian.) A code sketch of the pyramid scheme follows below.
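
// A minimal sketch of the pyramid scheme just described, assuming a
// hypothetical function 'filter_layer' which runs a metakernel over one
// pyramid layer (the kernel setup mirrors build_kernel further below):
//
//   kernel_type k ( vigra::Shape2 ( 9 , 9 ) ) ;
//   // ... fill k as in build_kernel ...
//   double l = 0.7 ;             // layer-to-layer scaling factor
//   filter_layer ( L , k ) ;     // detailed layer, kernel as built
//   scale_kernel ( k , l , l ) ; // shrink the metakernel by l
//   filter_layer ( M , k ) ;     // scaled-down layer M
//
// Both applications then realize 'the same' filter in view space.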

// We use the following strategy:
// codify the image's metrics in an object which provides methods to
// convert an image coordinate into spherical coordinates, and ray
// coordinates into image coordinates.
// Then convert each batch of incoming 2D image coordinates to spherical
// coordinates. The spherical coordinates are interpreted as a rotation,
// and this rotation is applied to each of the kernel coefficients' loci
// in turn, yielding 'pickup coordinates' where the image is evaluated.
// The evaluations for all coefficients are weighted and summed up,
// yielding the filter response, which is stored in the target image.
// So the mathematics are quite involved: for each image point, we
// have to calculate the rotational quaternion, apply it to all the
// kernel coefficients, evaluate at these locations, and form the
// weighted sum. The SIMD code used here does that with batches of 16
// points at a time, which is a great speed-up.
// Note how this strategy is SIMD-friendly, whereas the alternative
// concept of 'reorienting a ray bundle to each source coordinate' is
// not so easily calculated with SIMD arithmetic. The outcome of both
// schemes is the same, but changing the sequence around allows us to go
// 'in SIMD' from a SIMD vector of target coordinates to a SIMD vector
// of pickup coordinates to a series of SIMD vectors of weighted partial
// results corresponding to kernel coefficients, and finally, when this
// series is summed up, to a SIMD vector of result pixels.
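
// In compact form, for a target pixel with spherical coordinates
// (az, el) and kernel coefficients (dx[i], dy[i], w[i]):
//
//   q      = q_yaw ( az ) * q_pitch ( el )
//   p[i]   = rotate ( q , ( 1 , dx[i] , dy[i] ) )
//   result = sum over i of w[i] * ev ( to_image ( p[i] ) )
//
// where ev is the interpolator evaluating the image. The code below
// implements exactly this chain, once per SIMD batch.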

#include <iostream>

#include <vspline/vspline.h>

// we want to use some optional headers from vspline which make our
// lives easier, allowing more concise notation.

#define XEL_TYPE vigra::TinyVector

#define VECTOR_TYPE vspline::simd_type
#include "vspline/opt/vector_promote.h"
#include "vspline/opt/xel_of_vector.h"
#undef VECTOR_TYPE

#ifdef USE_VC

#define VECTOR_TYPE vspline::vc_simd_type
#include "vspline/opt/vector_promote.h"
#include "vspline/opt/xel_of_vector.h"
#undef VECTOR_TYPE

#endif

#ifdef USE_HWY

#define VECTOR_TYPE vspline::hwy_simd_type
#include "vspline/opt/vector_promote.h"
#include "vspline/opt/xel_of_vector.h"
#undef VECTOR_TYPE

#endif

#undef XEL_TYPE

#include <vigra/stdimage.hxx>
#include <vigra/imageinfo.hxx>
#include <vigra/impex.hxx>
#include <vigra/quaternion.hxx>

// we silently assume we have a colour image

typedef vigra::RGBValue < float , 0 , 1 , 2 > pixel_type ;

// coordinate_type has a 2D coordinate

typedef vigra::TinyVector < float , 2 > coordinate_type ;

// ray_type has a 3D coordinate

typedef vigra::TinyVector < float , 3 > ray_type ;

// type of b-spline object

typedef vspline::bspline < pixel_type , 2 > spline_type ;

// target_type is a 2D array of pixels

typedef vigra::MultiArray < 2 , pixel_type > target_type ;

// filter coefficients are TinyVectors of three values:
// the horizontal and vertical distance from the filter's
// center, and the corresponding weight.

typedef vigra::TinyVector < float , 3 > coefficient_type ;

// kernel_type is a 2D MultiArray of coefficients.

typedef vigra::MultiArray < 2 , coefficient_type > kernel_type ;

// image_base_type has the basic metrics of an image, both in pixel
// units and in model space units.

struct image_base_type
{
  // metrics of the image in pixel units

  size_t px_width ;
  size_t px_height ;
  double hfov ;

  // corresponding width and height values in unit sphere radii.
  // To clarify: the rectilinear source image is 'mounted' in model
  // space at one unit's distance from the model's center, and its
  // horizontal extent in model space, u_width, is equal to twice the
  // tangent of half the horizontal field of view. u_height is
  // calculated from u_width via the aspect ratio. px_per_u and
  // u_per_px are simply linear scaling factors.
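  // A quick worked example: with hfov = 90 degrees, u_width comes out
  // as 2 * tan ( pi / 4 ) = 2.0, so a 1920 pixel wide image yields
  // u_per_px = 2.0 / 1920, and a pixel at the image center spans
  // roughly 0.06 degrees of the view.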

  double u_width ;
  double u_height ;

  // ratio of pixel units to unit sphere radii (u) and inverse

  double px_per_u ;
  double u_per_px ;

  image_base_type ( const size_t & _px_width ,
                    const size_t & _px_height ,
                    const double & _hfov )
  : px_width ( _px_width ) ,
    px_height ( _px_height ) ,
    hfov ( _hfov )
  {
    assert ( hfov >= 0.0 ) ;
    u_width = 2.0 * tan ( hfov / 2.0 ) ;
    u_height = u_width * px_height / double ( px_width ) ;
    px_per_u = px_width / u_width ;
    u_per_px = u_width / px_width ;
  }
} ;

// image_type provides conversion from 2D rectilinear image coordinates
// to spherical coordinates, and from 3D cartesian to 2D rectilinear.
// It uses two vspline::domains to encode the shift and scale operation,
// and the reprojection.

template < std::size_t vsize = vspline::vector_traits < float > :: size >
struct image_type
{
  const vspline::domain_type < coordinate_type , vsize > scale_to_model ;
  const vspline::domain_type < coordinate_type , vsize > scale_to_image ;
  const image_base_type image_base ;

private:

  // scale_to_model transforms image coordinates to model space coordinates.
  // This is a simple scale-and-shift operation which is implemented with
  // a vspline::domain_type. scale_to_image performs the inverse operation.

  image_type ( const image_base_type & _ib )
  : image_base ( _ib ) ,
    scale_to_model ( { -.5f ,
                       -.5f } ,
                     { float ( _ib.px_width - .5 ) ,
                       float ( _ib.px_height - 0.5 ) } ,
                     { float ( - _ib.u_width / 2.0 ) ,
                       float ( - _ib.u_height / 2.0 ) } ,
                     { float ( _ib.u_width / 2.0 ) ,
                       float ( _ib.u_height / 2.0 ) } ) ,
    scale_to_image ( { float ( - _ib.u_width / 2.0 ) ,
                       float ( - _ib.u_height / 2.0 ) } ,
                     { float ( _ib.u_width / 2.0 ) ,
                       float ( _ib.u_height / 2.0 ) } ,
                     { -.5f ,
                       -.5f } ,
                     { float ( _ib.px_width - .5 ) ,
                       float ( _ib.px_height - 0.5 ) } )
  { }

public:

  // public c'tor taking image width, height and hfov

  image_type ( const size_t & _px_width ,
               const size_t & _px_height ,
               const double & _hfov )
  : image_type ( image_base_type ( _px_width ,
                                   _px_height ,
                                   _hfov ) )
  { }

  // to_spherical calculates azimuth and elevation from a 2D image coordinate

  template < typename dtype >
  vigra::TinyVector < dtype , 2 > to_spherical
    ( const vigra::TinyVector < dtype , 2 > & ci ) const
  {
    // cm will hold the coordinate in model space

    vigra::TinyVector < dtype , 2 > cm ;

    // transform the incoming image coordinate to model space

    scale_to_model.eval ( ci , cm ) ;

    // find azimuth and elevation

    dtype azimuth = atan ( cm[0] ) ;

    // distance from origin to the point projected to the horizontal plane

    dtype fl = sqrt ( 1.0 + cm[0] * cm[0] ) ;

    dtype elevation = - atan ( cm[1] / fl ) ;

    return { azimuth , elevation } ;
  }

  // to_image intersects a ray with the image plane, yielding 2D rectilinear.

  template < typename dtype >
  vigra::TinyVector < dtype , 2 > to_image
    ( const vigra::TinyVector < dtype , 3 > & cm3 ) const
  {
    vigra::TinyVector < dtype , 2 > cm { cm3[1] / cm3[0] , cm3[2] / cm3[0] } ;

    scale_to_image.eval ( cm , cm ) ;

    return cm ;
  }

} ;

// adapted quaternion code taken from lux, omitting the roll component
// rotation, which is not used in this program. Note how the inclusion of
// the headers from vspline's opt section introduces traits code for vigra
// which allows us to create vigra::Quaternions with simdized dtype. With
// this addition we can formulate our pixel pipeline in a type-agnostic
// way so that it functions with both scalar and SIMD data.

template < typename dtype >
vigra::Quaternion < dtype > get_rotation_q ( dtype yaw , dtype pitch )
{
  // first calculate the component quaternions.
  // for a right-handed system and clockwise rotations, we have this formula:
  // q = cos ( phi / 2 ) + ( ux * i + uy * j + uz * k ) * sin ( phi / 2 )
  // where phi is the angle of rotation and (ux, uy, uz) is the axis around
  // which we want to rotate some 3D object.

  // if we consider a target in the line of sight, we take pitch as moving
  // it upwards, rotating clockwise around the positive y axis (0, 1, 0)

  vigra::Quaternion < dtype > q_pitch ( cos ( pitch / 2.0 ) ,
                                        0.0 , sin ( pitch / 2.0 ) , 0.0 ) ;

  // if we consider a target in the line of sight, we take yaw as moving
  // it to the right, which is counterclockwise around the z axis, or
  // clockwise around the negative z axis (0, 0, -1)

  vigra::Quaternion < dtype > q_yaw ( cos ( -yaw / 2.0 ) ,
                                      0.0 , 0.0 , - sin ( -yaw / 2.0 ) ) ;

  // now produce the concatenated operation by multiplying the components

  vigra::Quaternion < dtype > total = q_yaw ;
  total *= q_pitch ;

  return total ;
}
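
// a quick sanity check: at the image center, azimuth and elevation are
// both zero, so get_rotation_q yields the identity quaternion (1, 0, 0, 0)
// and the kernel's loci pass through rotate_q unchanged - there, the
// metafilter behaves just like the ordinary convolution it was built from.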

/// apply the rotation codified in a quaternion to a 3D point

template < typename dtype , typename qtype >
vigra::TinyVector < dtype , 3 >
rotate_q ( const vigra::TinyVector < dtype , 3 > & v ,
           vigra::Quaternion < qtype > q )
{
  vigra::TinyVector < dtype , 3 > result ;
  vigra::Quaternion < dtype > vq ( 0.0 , v[0] , v[1] , v[2] ) ;
  vigra::Quaternion < dtype > qq ( q[0] , q[1] , q[2] , q[3] ) ;
  vigra::Quaternion < dtype > qc = conj ( qq ) ;
  qq *= vq ;
  qq *= qc ;
  result[0] = qq[1] ;
  result[1] = qq[2] ;
  result[2] = qq[3] ;
  return result ;
}
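
// this is the standard quaternion rotation v' = q * v * conj ( q ) , with
// v embedded as the pure quaternion ( 0 , v[0] , v[1] , v[2] ) and the
// vector part of the product extracted as the rotated point.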

// the filter is built by functor composition. EV is the
// type of the interpolator producing the pickup values.
// The kernel's coefficients are expected to be scaled to
// their final magnitude, and usually the weight components
// will have been normalized.
// The difference to a normal kernel is that the coefficients'
// loci are not necessarily on a grid, and if they are on a grid,
// they needn't be unit-spaced. If you think of the kernel as a
// set of weighted points on a plane tangent to the unit sphere
// in forward direction, each coefficient's locus defines a ray.
// The filter's output is generated by moving the bundle of rays
// so that their center coincides with the incoming coordinate,
// and the interpolator is evaluated where each of the rays
// hits the image surface. The evaluations are weighted and summed
// up to yield the result.
// With this design, we gain a lot of flexibility: we can scale the
// kernel up and down and use unconventional coefficient patterns.
// And the filter's response is that of 'what was looked at' rather
// than of 'what is in the image'. It does in effect have inherent
// perspective correction. So it's a good base for feature detectors,
// which should not be fed 'warped' data for optimal response.
// Note that the funny-sounding comments which mix singular and plural
// try to reflect the fact that the code can process single coordinates
// as well as 'simdized' coordinates.

template < typename I , typename O , size_t S ,
           template < typename , typename , size_t > class EV >
struct meta_filter
: public vspline::unary_functor < I , O , S >
{
  typedef vspline::unary_functor < I , O , S > base_type ;
  typedef EV < I , O , S > inner_type ;

  const inner_type & inner ;
  const kernel_type & kernel ;
  const image_type < S > img ;

  meta_filter ( const inner_type & _inner ,
                const kernel_type & _kernel ,
                const coordinate_type & _extent ,
                const float & _hfov )
  : inner ( _inner ) ,
    kernel ( _kernel ) ,
    img ( _extent[0] , _extent[1] , _hfov )
  { } ;

  // since the code is the same for vectorized and unvectorized
  // operation, we can write an 'eval' template which can be used
  // both for single-value evaluation and vectorized evaluation.
  // the incoming coordinate 'c' is a 2D image coordinate or its
  // SIMD equivalent, a vigra::TinyVector of two SIMD vectors.
  template < class IN , class OUT >
  void eval ( const IN & c ,
              OUT & result ) const
  {
    // elementary type of IN, may be a scalar or a vector

    typedef typename IN::value_type ele_type ;

    // clear 'result'

    result = 0.0f ;

    // convert the incoming coordinate(s) to spherical

    auto cc = img.to_spherical ( c ) ;

    // calculate a quaternion containing two rotations: the azimuth
    // as yaw and the elevation as pitch

    auto q = get_rotation_q ( cc[0] , cc[1] ) ;

    // iterate over the coefficients

    for ( auto const & cf : kernel )
    {
      // cf now holds three values: cf[0] and cf[1] hold the distance
      // of the kernel coefficient from the kernel's center - these two
      // values would be implicit in a 'normal' kernel and result from
      // the coefficient's place in the kernel, but here we allow
      // arbitrary values for maximal flexibility. cf[2] holds the
      // kernel coefficient's value (the 'weight'), which is the only
      // value a normal kernel would hold for each coefficient.
      // Note that the values in cf are not SIMD vectors but simple
      // scalars. We use references for clarity:

      const auto & dx ( cf[0] ) ;
      const auto & dy ( cf[1] ) ;
      const auto & weight ( cf[2] ) ;

      // cc has the spherical coordinates of the current target pixel(s).
      // we have to perform these steps:

      // - apply quaternion q to (1, dx, dy), which yields the pickup
      //   coordinate(s). We need a 3D point, so initially we position
      //   it one unit 'in front':            VVVV

      vigra::TinyVector < ele_type , 3 > p3d { 1.0f , dx , dy } ;

      // now we apply the quaternion, receiving (a) 3D 'ray' coordinate(s)

      p3d = rotate_q ( p3d , q ) ;

      // now convert p3d back to image coordinates to yield the
      // pickup coordinate(s) in 2D image coordinates.

      auto ci = img.to_image ( p3d ) ;

      // there, evaluate the interpolator to yield the pickup value(s)

      OUT pickup ; // to hold partial result(s) from pickup point(s)

      inner.eval ( ci , pickup ) ;

      // apply the current weight to the pickup value(s) and sum up,
      // yielding the filter output(s)

      result += pickup * weight ;
    }
  }

} ;

// for starters, we provide building a kernel from an ordinary 2D kernel
// with equidistant coefficients, as it would be used for a convolution.
// The loci of the coefficients are calculated by referring to 'img';
// they are given in model space units. Here, their spacing is picked
// so that it coincides with the distance between two adjacent pixels
// at the image's center. Note that this kernel is deliberately simple
// and meant for demonstration purposes; it does not really exploit the
// flexibility which the free positioning of the coefficients relative
// to the center would allow.

void build_kernel ( const vigra::MultiArrayView < 2 , float > & kernel_2d ,
                    const image_base_type & img ,
                    kernel_type & meta_kernel )
{
  // we get the 2D kernel's extent

  auto kernel_shape = kernel_2d.shape() ;

  coordinate_type kernel_center ( kernel_shape[0] - 1 , kernel_shape[1] - 1 ) ;
  kernel_center /= 2 ;

  assert ( kernel_shape == meta_kernel.shape() ) ;

  for ( int x = 0 ; x < kernel_shape[0] ; x++ )
  {
    for ( int y = 0 ; y < kernel_shape[1] ; y++ )
    {
      float dx = img.u_per_px * ( x - kernel_center [ 0 ] ) ;
      float dy = img.u_per_px * ( y - kernel_center [ 1 ] ) ;
      float weight = kernel_2d [ { x , y } ] ;

      meta_kernel [ { x , y } ] = { dx , dy , weight } ;
    }
  }
  // meta_kernel [ { 0 , 0 } ] [ 2 ] = 1.0 ;
}

// to scale the kernel, we simply multiply the 'locus' parts of
// the coefficients with a factor. Note how we can exploit vigra's
// convenient capability of multiplying an entire array of T
// (namely k) with a single T (cf in this case):

void scale_kernel ( kernel_type & k , double sx , double sy )
{
  coefficient_type cf { float ( sx ) , float ( sy ) , 1.0f } ;
  k *= cf ;
}
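
// for example, scale_kernel ( k , 0.5 , 0.5 ) moves every pickup point
// to half its distance from the kernel's center, shrinking the filter's
// footprint to half its size in both directions while leaving the
// weights untouched.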

int main ( int argc , char * argv[] )
{
  if ( argc < 5 )
  {
    std::cerr << "pass a colour image file as first argument" << std::endl ;
    std::cerr << "and the hfov as second argument, followed by" << std::endl ;
    std::cerr << "the horizontal and vertical kernel scaling factor" << std::endl ;
    exit( -1 ) ;
  }

  float hfov = std::atof ( argv[2] ) * M_PI / 180.0 ; // hfov in degrees, converted to radians
  float sx = std::atof ( argv[3] ) ; // horizontal kernel scaling factor
  float sy = std::atof ( argv[4] ) ; // vertical kernel scaling factor
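
  // so a typical invocation might be:
  //
  //   ./metafilter3 image.tif 90 1.0 1.0
  //
  // processing image.tif with a 90 degree hfov and an unscaled kernel.
  // (the name of the executable depends on how this file was compiled)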

  // get the image file name

  vigra::ImageImportInfo imageInfo ( argv[1] ) ;

  image_base_type img ( imageInfo.width() , imageInfo.height() , hfov ) ;

  // create a cubic 2D b-spline object containing the image data.
  // using a cubic b-spline here is quite expensive - after all, for every
  // pickup point this spline has to be evaluated. If the kernel is large,
  // this is overkill, and even a degree-1 spline (bilinear interpolation)
  // yields 'good enough' results.

  spline_type bspl ( imageInfo.shape() ) ;

  // load the image data into the b-spline's core.

  vigra::importImage ( imageInfo , bspl.core ) ;

  // prefilter the b-spline

  bspl.prefilter() ;

  // create a 'safe' evaluator from the b-spline, so we needn't worry
  // about out-of-range access; the default safe evaluator uses mirror
  // boundary conditions and maps out-of-range coordinates into the range
  // by mirroring on the bounds

  auto ev = vspline::make_safe_evaluator ( bspl ) ;

  // now we set up the meta filter. Here we use a sampling of a quadratic
  // b-spline kernel with a sub-unit-spaced 9x9 grid.

  vspline::basis_functor < double > bf ( 2 ) ;

  vigra::MultiArray < 2 , float > kernel_2d ( vigra::Shape2 ( 9 , 9 ) ) ;
  double sum = 0.0 ;

  for ( int x = -4 ; x < 5 ; x++ )
  {
    for ( int y = -4 ; y < 5 ; y++ )
    {
      double kx = bf ( 0.25 * x ) ;
      double ky = bf ( 0.25 * y ) ;
      double k = kx * ky ;
      std::cout << " " << k ;
      kernel_2d [ { x + 4 , y + 4 } ] = k ;
      sum += k ;
    }
    std::cout << std::endl ;
  }

  // normalize to 1.0

  kernel_2d /= sum ;

  kernel_type meta_kernel ( vigra::Shape2 ( 9 , 9 ) ) ;

  build_kernel ( kernel_2d , img , meta_kernel ) ;
  scale_kernel ( meta_kernel , sx , sy ) ;

  // now we create the actual meta filter function from the
  // interpolator and the filter data. Note that a large hfov is
  // needed to actually see that the result of the filter looks
  // different in the center and towards the edges, and especially
  // the corners.

  auto mev = meta_filter < coordinate_type ,
                           pixel_type ,
                           vspline::vector_traits < float > :: size ,
                           vspline::grok_type >
                         ( ev , meta_kernel , imageInfo.shape() , hfov ) ;

  // this is where the result should go:

  target_type target ( imageInfo.shape() ) ;

  // finally, we trigger the pixel pipeline by passing the functor 'mev'
  // and the to-be-filled target array to vspline::transform, which
  // takes care of 'wielding' or 'rolling out' the functor, filling
  // the target array with result data.

  vspline::transform ( mev , target ) ;

  // store the result with vigra impex

  vigra::ImageExportInfo eximageInfo ( "metafilter3.tif" ) ;

  std::cout << "storing the target image as 'metafilter3.tif'" << std::endl ;

  vigra::exportImage ( target ,
                       eximageInfo
                       .setPixelType ( "UINT16" )
                       .setForcedRangeMapping ( 0 , 255 , 0 , 65535 ) ) ;

  exit ( 0 ) ;
}