Lux Options - a Dictionary Approach

This is a manual text describing the command line options of lux, the FOSS image and panorama viewer. It's work in progress, I'll add to it as I find time, at times simply copying text from the README, but also writing new text to provide more in-depth views on the options and the effects they have. Progress is in alphabetic sequence.

This manual takes a different approach to the descriptions given in the README: it's structured like a dictionary, with one entry per command line option, and text which puts the option in perspective, extending beyond mere technicality where this is instructive. Lux is extremely configurable, and the sheer number of options is daunting for a new user, even though lux can be run without having to actually use any of the options: lux has heuristics for most settings which will try and provide a 'good' value. My hope is that going through the options one by one will shed light on the underlying design and enable users to use lux to better effect - and also serve as an entry point for interested developers to 'dive into' the lux code base. The ordering is by long option name.

Command line syntax and use of long options in 'lux ini files' is explained in the invocation chapter in the README. In short, you can pass long arguments in 'standard' UNIX fashion like

lux --option value ...

or use lux-specific 'argument assignment' syntax like

lux --option=value ...

In the documentation I usually give the latter form. In lux ini files, issue one assignment per line and omit the two minus signs. Each long argument must be passed a value.


This overview lists valid long lux options as hyperlinks pointing to the entry for that option, followed by an indication of possible values in angle brackets, and the default value in round brackets. 'Normal' options follow 'last-one-wins' semantics: if they are passed more than once, their last occurrence will 'win'. 'List' or 'vector' options are marked by 'adds value to a list'. They follow 'add-to-list' semantics, meaning that they add to a list of values, mostly pertaining to corresponding facets in a facet map. Here, every occurrence of the option counts, and their sequence determines the sequence of values in the list.


allow_pan_mode

This is a boolean flag, which can be set to yes or no, and defaults to yes. There is a short option '-P' which is synonymous with '--allow_pan_mode=no'. Allowing or disallowing pan mode can also be done at run time: F8 toggles the state.

Don't confuse this option with autopan, which can launch lux in automatic pan mode if you pass --autopan=yes. allow_pan_mode only affects the rendering process; it has no visible effect.

'pan mode' is an optimization in the rendering process: for some source image projections, a pan in strictly horizontal direction can be rendered efficiently by adding to the horizontal component of a set of precalculated coordinates. 'pan mode' can be used for spherical and cylindric images. If allow_pan_mode is set to 'yes' for other projections, it won't have an effect: it's an opt-in option, choosing specialized code if that is possible. So if you allow pan mode for a fisheye image, this won't do anything, but it's not rejected as an error. This is a common method in lux: it will do stuff if it can, but if it can only do the next best thing, it will do that instead - rather than failing. Not using pan mode will, at worst, be slower than using it; the result is the same.

Why only these two projections? There are two reasons: first, the projection has to be suitable. pan mode works by precalculating the 'pickup' coordinates for each target image pixel and adding a small increment to all x coordinates for each successive frame. This only works for mosaic, spherical and cylindric source images; for other projections, panning will modify the y coordinate. So why not mosaic projection? Because keeping, accessing and modifying the coordinate array costs resources, and only comes out 'on top' if calculation of the coordinates is expensive. For mosaic images, this is not the case, whereas spherical and cylindric images need transcendental functions to calculate pickup coordinates, which is slow - and using precalculated coordinates is faster.
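
For the technically-minded, the mechanism can be sketched in a few lines of Python. This is a conceptual sketch only - the function names and the mapping formula are made up, and lux itself works differently in detail:

```python
import math

def pickup_coords(width, height, hfov):
    # Expensive one-off setup: mapping each target pixel to source
    # coordinates needs transcendental functions (here a stand-in formula).
    coords = []
    for y in range(height):
        for x in range(width):
            yaw = (x / width - 0.5) * hfov
            pitch = math.atan((y / height - 0.5) * 2.0)  # stand-in transcendental
            coords.append([yaw, pitch])
    return coords

def pan_frame(coords, dx):
    # Pan mode: per frame, only add a small horizontal increment to the
    # precalculated x (yaw) component - no transcendentals needed.
    for c in coords:
        c[0] += dx
```

Per frame, pan mode replaces a full recalculation with one addition per pixel, which is where the speedup comes from.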

You can easily see the speedup from using pan mode by running a common lux benchmark: a thousand-frame pan. Assuming you have an image 'image.jpg', do this (using short options for brevity):

lux -ps -h360 -z1000 -A.05 --allow_pan_mode=yes image.jpg

lux -ps -h360 -z1000 -A.05 --allow_pan_mode=no image.jpg

Just to explain the short options: -ps sets spherical projection, -h360 sets horizontal field of view to 360, -z1000 tells lux to stop after 1000 frames and -A.05 sets the autopan speed.

Compare the average frame rendering time which lux displays when the program ends. On my system, using pan mode is about 30% faster.

From the results you can see that for 'normal operation' disallowing pan mode slows lux down or has no effect. So why make it an option? Here's a typical example of lux' philosophy of 'if it's easy to make configurable, make it configurable'. As a developer, I might test different rendering functions, or try the code on different target systems. I can't be sure that what I think of as an optimization will in fact perform better. If I have the option to switch certain behaviour on and off, I can test my assumption quickly without having to compile two different versions. And I can never foresee every system lux will be made to run on: my choice of a 'good' solution may be wrong at some specific site. Having the option to switch things on or off easily can mitigate a bad choice based on heuristics.

For the technically-minded, I'd like to add that lux actually detects whether panning is strictly horizontal or not, irrespective of the user activity triggering the pan: it may be due to autopan, but pressing the right/left arrow or using the GUI ('CAM LEFT' or 'CAM RIGHT') will also be recognized as a strictly horizontal pan. The distinction is made in the rendering code, which tests successive frame rendering jobs for viability.


alpha

This is a 'one-of' choice, accepting 'yes', 'no', 'as-file' and 'auto' as values, and defaulting to 'auto'.

This option sets alpha channel processing. Lux is designed to be able to do all of its operations with or without an alpha channel. But it goes beyond that by allowing you to ignore an existing alpha channel, or to add a fully opaque one to images which have none in the first place. The two obvious values for this option are 'yes' and 'no', which unconditionally switch alpha processing on or off. Passing 'as-file' will switch alpha processing on if the image has an alpha channel, and off if it doesn't. Finally there is 'auto'. This seems to be just what 'as-file' does, but there's a subtle difference: if an image is found to have an alpha channel which is fully opaque, the alpha channel is ignored. This is a time-saver, because alpha channel processing costs both memory and CPU cycles for each rendered frame, whereas the test of the alpha channel has to be done only once, when the image is loaded, and is relatively fast. So 'auto' is the default, but yet again, the setting is left configurable, because the test of the alpha channel - if it's there at all - is clearly futile if you already know it's not fully opaque.
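
The decision logic for the four values can be summed up in a small sketch (Python, with invented names - not lux's actual code):

```python
def decide_alpha_processing(mode, has_alpha, fully_opaque):
    """Decide whether to process an alpha channel, following the four
    values described above. A conceptual sketch with made-up names."""
    if mode == "yes":
        return True
    if mode == "no":
        return False
    if mode == "as-file":
        return has_alpha
    if mode == "auto":
        # like 'as-file', but a fully opaque alpha channel is ignored,
        # saving memory and CPU cycles for every rendered frame
        return has_alpha and not fully_opaque
    raise ValueError(mode)
```

The only case where 'auto' and 'as-file' differ is an image whose alpha channel turns out to be fully opaque.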

While the above is true for single-image operation, when it comes to facet maps, lux isn't as flexible as one might wish: alpha mode is decided for the whole set of facet images. When 'auto' or 'as-file' are passed, lux will use the first image to decide for either 'yes' or 'no', and treat the rest in the same way.

Switching alpha processing off may have surprising effects: it will uncover image content which was made 'invisible' by means of a fully transparent alpha channel. Switching alpha processing off makes lux ignore the alpha channel, rather than apply it to the image data and display the result. If you need that behaviour, there is no option in lux to do it - you'll have to go via a snapshot: open the image without passing an option, then do a 'source-like' snapshot by pressing Shift+E to an image format that knows no transparency.

TODO: consider adding an option to apply, then ignore the alpha channel


auto_position

This is a boolean flag, which can be set to yes or no, and defaults to yes.

I like panning over panoramic images, starting from the left margin. So the default in lux is to position the view just so that the panoramic image's left margin coincides with the left margin of the viewer's window, and the pitch is adapted so that the viewer window's vertical center coincides with the panoramic image's. Then, all I need to do is hit 'Space' and lux will autopan over the image. Now that's a matter of taste, and my preference to move from left to right may be due to cultural conditioning... be that as it may, it's the way lux does it. If you switch auto_position off, lux will 'land' you in the center of the panoramic image.


auto_quality

This is a boolean flag, which can be set to yes or no, and defaults to no.

It might also be called auto_animation_quality, because it only affects rendering of 'animated sequences', like zooms, pans, rotations etc - where many frames are rendered in quick succession to create the illusion of movement. Rendering many frames quickly enough is not always possible, and it's even hard to predict how a specific animated sequence will perform. But lux has the ability to modify its rendering quality based on the rendering time, and this flag can be used to switch the behaviour on - by default it's off unless the target platform is a mac. The lowest rendering quality which this option can produce is 10% (area-relative) - lower values can only be reached 'manually'.

Lux lowers rendering time by, internally, rendering smaller frames, which are left to the GPU to be upscaled: the GPU isn't used much by lux at all, so this is not a performance issue; usually the bottleneck is CPU performance, because lux does all the rendering work on the CPU. The blown-up small frames may look blurred - of course depending on the amount of downscaling. When auto_quality is on, lux looks at the time needed to render frames, and adapts frame size to make the frame rendering time fit into the given rendering time budget.

The mechanism is not perfect; it has a tendency to be too conservative, lowering rendering quality too aggressively, so that animated sequences do render smoothly, but look too blurred. If you have the GUI on, you can see the strength of the effect given in percent. With auto_quality off, you can set this value and it will stay there - with auto_quality on, you may set it, but lux will not stick to it and will adjust it as needed. Toggle this setting with the 'G' key.

Using auto_quality will not totally prevent dropped frames and the 'image stutter' that goes along with dropped frames, because it can't be too 'twitchy' and needs to be halfway sure that regulation is needed. Any interaction with the viewer will make lux 'reconsider', but once it's come to a specific value for 'animation quality', it will only continue regulating the value in the same direction to avoid 'pumping'. Only user interaction 'frees' the direction, so if you get stuck one way, just interact with lux, to make it 'reconsider'.
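
The regulation described above can be sketched like this (a Python sketch under my own assumptions about step sizes and thresholds - lux's actual regulation logic differs in detail):

```python
def adapt_quality(quality, frame_ms, budget_ms, direction):
    """One regulation step: shrink the (area-relative) rendering quality
    when frames take too long, grow it when there is clear headroom - but
    only keep moving in the direction already taken, to avoid 'pumping'.
    direction is 0 (free), -1 (shrinking) or +1 (growing); user
    interaction would reset it to 0. Step sizes are made-up values."""
    if frame_ms > budget_ms and direction in (0, -1):
        # never drop below 10% automatically
        return max(0.10, quality * 0.8), -1
    if frame_ms < 0.5 * budget_ms and direction in (0, 1):
        return min(1.0, quality * 1.25), 1
    return quality, direction
```

Note how a regulator already 'locked' downward ignores headroom until the direction is freed again - that is the anti-pumping behaviour the text describes.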

Some stutter isn't even due to insufficient rendering capacity - there are other issues involved like memory bus traffic (animated sequences move a lot of memory) and your windowing system, which may, for example, have difficulties rendering 60 fps to a window, even when rendering them full-screen is fine. This is surprising - after all the frames for windowed display are smaller than for full-screen display - but sharing the screen with other applications takes extra work, and I often see a little stutter on my system when rendering to a window. Another reason for dropped frames is background tasks, like the rendering of an image fusion or panorama when idle-time processing is active. Ideally, such long-running background jobs should be aborted as soon as animated sequences are in progress, but the granularity of the code isn't (yet) quite fine enough.


autopan

This is a numeric option taking floating point values, and defaults to 0. You can also use the short option -A.

If you pass a nonzero value, lux will start in auto-pan mode, panning to the right for positive values and to the left for negative ones. A 'leisurely pace' is around 0.05. Having this as a command line option is meant more for scripted sessions than for 'normal' interactive use - you can, for example, set up a simple presentation with a setting like

lux -z1000 -A.05 panorama.*

This will show a 1000-frame pan over all images passed - which may of course need some tweaking to get the number of frames and panning speed right, and isn't perfect insofar as, if the user interacts, the next image is still displayed after 1000 rendered frames. But it's cool for digital signage, where you use lux to feed a set of panoramas to be displayed on some public monitor or such. When starting lux 'running', you may want to add -q1 or -f2 -q2, to avoid the bit of stutter caused by the background workload of calculating the 'high quality' interpolators.


blending

Sets the blending mode: one of "auto", "ranked", "hdr" and "quorate", defaulting to "auto".

This is a wide topic. The blending mode defines how lux displays a 'facet map', a synoptic view composed of several images. So for single-image displays, this flag is irrelevant. And for facet maps, you'll rarely have to pass it explicitly, because lux analyzes the facet map and picks the blending mode automatically, which will usually be just right, and is what happens when you pass the default, 'auto'.

If you want to control the blending mode, you use this option. I'll explain the possible modes in turn, going into quite some detail to clarify the ins and outs of the various modes, and their relation to exposure fusion and focus stacks.

ranked blending

This mode is used for panoramas, where several 'partial' images show different parts of a larger whole. So the variation is in image position, and you need information about the image positions to get a panoramic display. This information is called 'image registration', and you do it with software like hugin, usually by creating 'control points' and using an optimization process to figure out how to best 'place' the images, resulting in a file describing the positions. hugin uses PTO format - PTO stands for PanoTools Optimizer, and is a 'classic' for the purpose. It's a bit hard to read, but easy to parse, and because it's so common, lux 'understands' a subset of it. Lux has its own file type to store image registration (the lux 'ini' file, usually with extension '.lux', used for all other configuration tasks as well), but for panoramas and brackets, PTO is probably better, because it's what the other programs produce and you don't have to first convert to lux' format.

If you have a set of images, and the registration information in a PTO or a lux ini file, and if it is a panorama, you use --blending=ranked. What does 'ranked' stand for? This takes a bit of explanation of lux' way of producing a synoptic view. To see how lux' way is different, we need to first look at how stitching is normally done.

Traditionally, image stitching is a two-step process: the partial images are first geometrically transformed into the desired target projection, and then the transformed images are put together to form a blended final image. The blending stage gets to see 2D images which overlap in parts, and it tries to figure out 'seams', which define the borders of the parts of every partial which will make it into the final image, and along which 'feathering' may be applied to hide the transition from one image to another, or which defines the outline of 'blending masks' used for multi-level blending. Seam generation is a complex process, trying to avoid putting the seam where it would be prominent - and this can be computationally expensive, because it needs to compare several alternatives and settle on a 'good' one.

Lux takes a different approach: first it 'drapes' the partial images in 'model space'. You can imagine this process like placing the images in space around you, so that if you look in a specific direction, the partial image 'hangs' just so that it looks like that part of the scene which was visible in that direction when the image was taken. Imagine the images as insubstantial, so that they can freely intersect. From the center, your point of view, you can imagine 'rays' going outwards, and hitting one or more of the 'draped' images around you. If a ray hits just one image, there is no ambiguity: pick from the image where the ray hits it, and that's your result for that ray. But if there are several partials, you have to decide how to produce your result. This is where lux' blending mode comes in: 'ranked' blending assigns a rank to every intersection of a ray and the partial images, and selects the intersection which ranks 'best' - normally the one which is closest to you - the center. If you use this method strictly, a hard, linear boundary will occur where two images intersect in model space, and you can often see this in lux when lux calculates a view quickly, like in animated sequences: strict ranked blending is quite fast to compute. lux can also apply feathering to hide the hard boundary, but it does not do any seam optimization: the boundary itself, feathered or not, is always where the images intersect in 'model space'.
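
The core of strict ranked blending can be condensed into a tiny sketch (Python, conceptual only; in reality the rank is a more elaborate criterion than a plain distance):

```python
def ranked_pick(intersections):
    """For one ray, pick the pixel from the intersection that ranks
    best - here, as in the normal case described above, the one closest
    to the viewer. Each intersection is a (distance, pixel_value) pair."""
    if not intersections:
        return None  # the ray hits no partial image at all
    return min(intersections, key=lambda i: i[0])[1]
```

Because this is a simple selection per ray, it is fast enough for animated sequences - the hard boundaries appear exactly where the best-ranking intersection switches from one partial to another.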

Seam optimization can hide imperfections in the set of partial images, like parallactic errors due to camera movement or inconsistencies between images due to movement in the scene, and such flaws will be more prominent in lux because there is nothing to mitigate them. Luckily, this is mainly a problem in 'live stitching' - the fast, near-immediate composition of a synoptic image used for animated sequences and for the initial view which is displayed right when the view comes to rest. After a short while, lux replaces this display with a composite view made with 'image splining', where boundaries become much less visible and the 'ranking' is no longer applied to the partial images, but to the blending masks fed into the modified Burt and Adelson image splining algorithm which lux employs. So 'strictly ranked' blending can be seen as a way to render composite views quickly when necessary: you get a 'live stitch' which is ready in a few milliseconds and may even render quickly enough to produce a fluid animation, which you could not expect in a process which first creates 'warped' partials, then does seam optimization and finally blends the images: all of that, put together, is way too slow. Finally, seam optimization can only do so much: the geometry of the overlap may make it impossible to find any seam which has no hard discontinuities. That's why 'traditionally' stitched panoramas may still have 'stitching errors', where even seam optimization could not hide imperfections in the set of partial images. Seam optimization is just that: it optimizes the seam, but what the actual optimum can be depends on the input, and 'optimal' only means that it's the best solution that could be found under the circumstances.
If, on the other hand, the set of partials fits well (proper registration, no parallactic errors) and there are no inconsistencies (static scene, no 'ghosts'), the seam may be put just about anywhere - seam optimization is not a big issue then, and putting it 'where the images intersect in model space' is just one of several valid choices. It does have advantages, though: it tends to pick those sections of the partials which are near their respective centers, which is usually where image quality is best (least lens distortion, best light). And since there is no seam optimization process, the process can't fail - enblend does at times produce cryptic fatal errors when seam placement fails, and you won't get errors of this type in lux.

There is another aspect to lux' way of image blending. enblend works in a step-by-step fashion: it starts out with a single image and blends it with the next image, then takes the result and the next image in line and blends them, etc.. lux, on the other hand, looks at all contributing partial images 'at once'. Then it figures out how much and how each partial image should contribute and produces layers encoding these contributions. Finally it adds up all these layers, yielding the result. enblend's way of blending obviously depends on the sequence in which it gets to 'see' the partial images, and it will even reject images which cover target area which is already covered by its previously accumulated result. lux gives all partial images the same opportunity to contribute to the final result, and if no other blending rules tell lux differently, you'll get to see each partial image's center plus some surrounding area - the voronoi cell, or 'facet' in lux parlance.
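
The order-independence of this 'all at once' approach is easy to illustrate (Python sketch; each partial's contribution is reduced to a single weighted pixel value for brevity):

```python
def blend_at_once(contributions):
    """All partials contribute simultaneously via a weighted sum of
    their layers, so the result does not depend on the order in which
    the images are seen - unlike a step-by-step blend.
    contributions: list of (weight, pixel_value) pairs."""
    wsum = sum(w for w, _ in contributions)
    return sum(w * v for w, v in contributions) / wsum
```

Feeding the same contributions in a different order yields the same result, which is the point of the comparison with enblend's sequential blending.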

Summing up, we see that lux' lack of seam optimization does look like a larger problem than it turns out to be, and the panoramas stitched with lux are oftentimes very hard to tell apart from panoramas stitched 'traditionally'. Luckily, it's not an either/or decision: if lux stitches don't work for you, you can just use any other stitcher to produce a panoramic image from your partials and then only view the result in lux.

If the view of a synoptic image done with ranked blending is at rest, lux will calculate a 'properly' blended view of the scene, using its modified Burt & Adelson image splining algorithm. This takes a little while, and what you see is that the hard boundaries between the partials seem to 'disappear' when the blended view is ready. If you find this behaviour annoying, you can switch it off, see snap_to_stitch - or press F12 to toggle the mode.

Lux has a 'fast route' to stitch a panorama into a single output image with one of its 'compound options'. See stitch.

lux can now also do panoramas with 'stacks'. Stacks are sets of images which have been taken with similar camera orientation, but varying exposure, and the 'stacked' images are typically exposure-fused or HDR-merged and subsequently processed as partial images for the panorama stitching process. If your panorama has stacks, lux will - by default - exposure-fuse them and then stitch a panorama from the fused stacks. You need to use --blending=ranked for the purpose - lux will set this automatically if it recognizes a panorama, but if you only have a single stack, lux will not consider this a panorama. If you prefer to have the stacks HDR-merged instead, pass --stack=hdr (see stack for more on the topic), and you can even choose in-between processing by assigning hdr_spread.

Note that lux will honour stack assignments in the PTO file unconditionally, while stitches done with the standard hugin workflow may override stack assignments with a heuristic. This may lead to different results. Note also that hugin may be set up to force all stack members to the same orientation, and you may have to manually 'unlink' the stacked images to get a correct registration.

While showing an animated view, lux will only display 'stack parents' to save time - fusing the stacks would take too long and force frame rates down to unacceptable levels. This can be annoying at times when the panorama has masks, because some masking will only become apparent when the stacks are in fact fused, which will make for a discrepancy between the 'live view' and the still image which 'kicks in' after some time when the viewer is at rest. You may want to refer to use_pto_masks, which has more to say about masking.

hdr blending

ranked blending, in a nutshell, makes a decision which partial image is best suited to yield information for a given ray, and picks information from the ray's intersection with that particular partial image. In contrast, 'hdr' blending in lux looks at all intersections of a ray with the partials and forms a weighted sum. This is mainly used for HDR blending, and that's where this value for the blending argument originated, even though this mode is now also used for focus stacks - more about that later.

To do this sort of blending, the partial images should be pretty much 'on top of each other', but in reality we rarely get them to fit perfectly, and we still need to go through image registration before we can 'drape' the partials in model space. Because all images will contribute, and oftentimes the result will be composed of several partials 'blended' together, ill-fitting images will be much more of a problem than with ranked blending.

Let's start with HDR blending proper. Here, we have several images taken with different exposure, and we want a weighted sum so that, where the scene was bright, more information is taken from partials with short exposure times (to avoid overexposure), and where the scene was dark, more information is taken from partials with long exposure (to avoid noise and quantization errors). Where the scene was 'somewhere in between', we often have a choice of partials to pick from, so we want to take most of the information from the partial which has brightness values near the middle of the range, where quality is usually best. But what we do not want is to make an either/or decision - if one does, one ends up with 'banding', showing hard boundaries where the switch is made from one image to another. Hence the weighted sum, with a 'smooth' weighting function (a Gaussian in lux), where every partial contributes. If several partials look like good candidates, we form something like the average, if one candidate is much better than the others, we use it pretty much exclusively, but we need a region 'in between' where the decision is not so clear-cut. So this first step in HDR blending figures out what weight to give every intersection of a ray and a partial image.

The next step is very important to understand: even though we calculate the weights for the weighted sum from the partials as they are, what we actually sum up are modified partials, which have been brightened/darkened to some average level. In an ideal world, these modified partials would all look the same, but in photography, they don't: the dark exposures, when brightened, will show noise and banding in their darker parts, but cover a wide dynamic range, and be good for bright parts where the other shots may be overexposed. The medium to bright exposures will usually show bits which were overexposed, and even if these bits are darkened, we can't recover the overexposed bits. But the weighting takes care of suppressing such 'bad' information, so the final weighted sum has several good properties:

  • bright areas are less likely to be overexposed, because we have high-weight content from brightened dark exposures
  • dark areas are less likely to show noise or banding, because we have high-weight content from darkened bright exposures
  • we have an extended dynamic range, because we have intensity values ranging from zero up to the brightened-up brightest pixel from the darkest exposure, which may be several times as bright as the maximum in the original partials.

After the weighted sum is formed, we have 'proper' HDR information, with a dynamic range which depends on the combined dynamic range of the input images. lux can save such image information to openEXR image files.
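
The weighting-and-summing scheme can be sketched per ray like this (Python; the Gaussian's center and width are made-up values, and dividing by a relative exposure stands in for the brightness normalization described above - lux's actual mathematics differ in detail):

```python
import math

def gaussian_weight(v, mid=0.5, sigma=0.2):
    # smooth weighting: pixel values near the middle of the range score
    # highest, avoiding hard either/or decisions ('banding')
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def hdr_merge_pixel(samples):
    """Merge one ray's intersections with the partials into an HDR value.
    samples: list of (pixel_value, relative_exposure) pairs. The weights
    come from the partials as they are, but what is summed up are the
    brightness-normalized values. A conceptual sketch only."""
    wsum = vsum = 0.0
    for value, exposure in samples:
        w = gaussian_weight(value)       # weight from the value as shot
        vsum += w * (value / exposure)   # sum the normalized value
        wsum += w
    return vsum / wsum
```

Note how a value near black or white gets a small weight, so overexposed or noisy content is suppressed in the sum, while consistent partials simply average out to the same radiance.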

Lux has a 'fast route' to HDR-merge into a single output image with one of its 'compound options'. See hdr_merge. Note that this workflow currently uses a pixel-based approach (the same approach as is used for animated sequences), but lux can now also HDR-merge with pyramid blending, passing --hdr_spread=1. The result will look different, because the pyramid blending code fuses the partials in sRGB space, whereas the 'standard' HDR-merging code uses linear RGB. There is now a new compound option 'hdr_fuse' which uses pyramid blending and hdr_spread=1 to produce an hdr-merged output with the alternative method.

How lux displays HDR content

So, after lux has processed all rays we're interested in (namely, those in the current view), we have a high dynamic range image. When lux displays HDR information (quickly), it simply picks the middle section of the dynamic range and displays it, which looks pretty much like a 'middle' exposure, with the same blown highlights, but less noise and banding in the dark areas. This is what you get to see during animated sequences, because, again, it's fast to compute. If you take a snapshot of an HDR composite to a format which supports HDR (like openEXR), the whole dynamic range is preserved. And if you use the brightness adjustment in lux, what looks blown initially can be 'brought in range' by darkening the view, at the expense of the dark areas going very dark - just as if you were taking a normal photo of the scene. For navigation and quality control, this kind of display is good enough, but to show HDR information 'squeezed into' the limited dynamic range of a monitor, you need one of three things: dynamic range compression, tonemapping, or exposure fusion.
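
The 'pick the middle section' display can be sketched as a clipped linear window (Python; the window bounds here are hypothetical parameters for illustration, not actual lux settings):

```python
def ldr_display(hdr_value, dark, bright):
    """Map an HDR value into [0, 1] for display by picking the section
    of the dynamic range between 'dark' and 'bright' and clipping the
    rest - a sketch of the fast display path described above."""
    return min(1.0, max(0.0, (hdr_value - dark) / (bright - dark)))
```

Everything above the window comes out as blown highlights, everything below as black - which is why darkening the view can 'bring in range' what initially looks blown.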

Lux can do (gentle and fast) dynamic range compression (press F9). The result may or may not be acceptable - it may be worthwhile to quickly get the highlights 'in range', but it's quite limited in its capabilities. Lux does not do tonemapping, which is yet another wide topic beyond the scope of this manual, and hard to get right: oftentimes the result comes out with a typical 'HDR' look with unnatural colours or strange contrasts. What lux does instead is exposure fusion. This blends several partial images with about the same weighting as for HDR blending, but composes the output from the unmodified partials - so the bright exposures are not darkened and the dark exposures are not brightened. One might attempt to do that ray-by-ray, as in HDR blending, but the results are disappointing, as can be seen in the literature. What is needed is, again, the Burt & Adelson image splining algorithm, with blending masks generated from the brightness-derived weights rather than spatial criteria.

Because it does compress the dynamic range but usually produces a 'natural' look, exposure fusion is ideal for displaying HDR content - but there is one big drawback: it's slow to compute. So, as for ranked blending, lux will only produce this 'expensive' high-quality view once the view is at rest, and it will take some extra time to do so - just like the view to a 'ranked' rendition needs some time until the 'properly' blended image is presented. This is noticeable and can be quite annoying, because the 'fast' rendition and the exposure fusion often differ widely in brightness. If the automatic switch to an exposure-fused view for still images annoys you, you can switch the behaviour off, see snap_to_fusion. You can also toggle snap-to-hq with F12.

Lux has a 'fast route' to produce an exposure fusion with one of its 'compound options'. See fuse. Note how this fast-lane process differs from the one used by hdr_merge, which exports 'proper' HDR data. fuse exports LDR data generated by exposure fusion. Also see hdr_fuse.

A close relative to 'genuine' exposure fusion is the creation of 'faux brackets': there, you start out with a single HDR image, produce several differently-exposed LDR images from it, and then exposure-fuse these to produce the final result. See faux_bracket for more on the topic.

lux now offers 'continuously variable dynamic range' for exposure fusions, where you can use a single parameter hdr_spread with a value from zero to one, zero producing a 'standard' exposure fusion, one an HDR merge, and values in between a 'compromise' between the two modes. This parameter affects all exposure fusions done with lux' modified Burt and Adelson image splining algorithm; its default is zero, so by default a 'standard' exposure fusion is created. If you render to an HDR image format like openEXR, hdr_spread allows you to tune precisely just 'how HDR' the outcome will be. If you render to an LDR format, hdr_spread affects how much of the dynamic range will 'make it' into the LDR output and how much will be clipped due to overexposure.
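
One way to picture the effect of hdr_spread is as an interpolation between the as-shot partial (exposure fusion) and the brightness-normalized partial (HDR merge). Note that this formula is my guess at the mechanism for illustration only - it is not lux's actual mathematics:

```python
def fusion_input(value, exposure, spread):
    """What goes into the blending for one partial, as a function of
    hdr_spread: 0 keeps the pixel value as shot (plain exposure fusion),
    1 uses the brightness-normalized value (HDR merge), and values in
    between interpolate. Hypothetical sketch, not lux's actual formula."""
    normalized = value / exposure
    return (1.0 - spread) * value + spread * normalized
```

With spread at zero the bright exposure stays bright (and may clip in LDR output); with spread at one it is pulled down to the common radiance level, extending the dynamic range of the result.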

focus stacks

We have considered HDR blending, which calculates weights based on brightness, and exposure fusion, which uses similar weighting but different mathematics. Both rely on a weighting scheme derived from analyzing pixel brightness over the image: HDR blending uses this information to blend all pixels on a ray, while exposure fusion blends image pyramids based on weighting pyramids, which in turn are based on pixel brightness. But we might as well use a different scheme to provide the weights, and there are many ways of doing that. lux handles one more specific weighting scheme, which is quite different from pixel brightness: local contrast. Usually local contrast is calculated by convolution with a small kernel, but lux uses the first derivative of a b-spline, which has the desirable property of being continuous, whereas a convolution is only defined at on-grid loci.

Using contrast weighting is done by specifying contrast_weight on the command line, which usually goes along with setting exposure_weight to zero, even though both metrics can be combined. It's only implemented for 'fused' output - both to screen and to images on disk - and not for 'live' displays used during animated sequences, so you only get to see it when the viewer is at rest or when you produce fused output.

Even though this mode of producing fused output is quite different from HDR blending, it needs blending mode 'hdr' - more for 'historic' reasons.

TODO: make an attempt to do focus stacking with per-pixel mathematics, avoiding the costly B&A algorithm. The literature hints that this may be feasible without creating unwanted artifacts, and I think this is quite likely because the images don't differ in overall brightness.

Lux has a 'fast route' to produce a focus stack with one of its 'compound options'; see focus_stack.

quorate blending

This is a third and independent blending mode, which is used for deghosting. It's also pixel-based and it will work for animated sequences and still images alike, but for animated sequences it's quite slow and animations will not be fluid.

This mode compares all ray-image intersections and tries to identify outliers: values which differ from the majority. You need at least three partial images to mark one as an outlier and the other two as the 'majority'. Quorate blending suppresses outliers and shows an average of the 'majority'. This is quite effective at deghosting when the 'ghost' is like a foreign object in an otherwise static scene - like a bird or a car passing through the set.
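The principle can be sketched in a few lines of Python - note that this is an illustration of the idea only, not lux' actual code, and the outlier threshold used here is a made-up figure:

```python
# Sketch of the quorate idea: per ray, keep the values close to the median
# (the 'majority') and average them; values far from the median count as
# outliers and are suppressed.
def quorate(values, threshold=0.1):
    assert len(values) >= 3            # need at least three contributors
    s = sorted(values)
    median = s[len(s) // 2]
    majority = [v for v in values if abs(v - median) <= threshold]
    return sum(majority) / len(majority)

# two samples agree, one (the 'ghost') is far off and gets suppressed:
print(quorate([0.52, 0.50, 0.95]))  # 0.51
```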

Lux has a 'fast route' to produce a deghosted image with one of its 'compound options'; see deghost.


What does 'bls' stand for? It's short for 'blending settings' - the 'bls' prefix is used for several parameters in lux. These parameters do affect the inner workings of the image blending code. They are provided to explore the new possibilities offered by my reimplementation of the Burt & Adelson image splining algorithm.

This option sets the decimation method used to get a pyramid level from the level 'below' it. It's akin to the parameter 'pyramid_smoothing_level' used for the image pyramids which lux employs for rendering down-scaled on-screen displays, and uses the same range of values. But, like the other bls... options, it pertains to the use of image pyramids in lux' modified Burt & Adelson image splining algorithm only.

My advice for this parameter is currently to use only decimators with all-positive kernels, which excludes the half-band filters: these have some negative kernel values which can lead to ringing artifacts. So all positive values, and negative values down to -4, should be fine - apart from -1, which stands for area decimation and doesn't always work well for all parameter constellations. I'm not entirely sure about area decimation in this context; it may turn out to be safe after all, which would be great because it's so fast.

The value set for decimators has grown over time and it's a bit confusing, so here's the story:

Initially, lux image pyramids used a downscaling scheme where raw image data were 'abused' as b-spline coefficients for a b-spline with 'large-ish' degree. This technique is formalized in vspline and named 'shifting' the spline; the result for shifting this way is a low-pass-filtered signal, which is precisely what's needed for downscaling. Downscaling is 'conventionally' thought of as a two-step process of first low-pass-filtering the signal, then decimating it by dropping, like, every other sample. lux downscaling is not grid-bound and the low-pass is inherent in the evaluation of the shifted spline, so it can lump the two processes together and directly produce the arbitrarily downscaled output by evaluating the shifted spline at the desired loci. To use this downscaling process (shifted b-splines) you pass a positive value to bls_i_spline_decimator, representing the desired spline degree. A sensible value would be seven to go with a scaling step of two, but even up to twelve is 'reasonable' - it depends on just how much of a low-pass one wants in this slot. To put this another way: when using a shifted spline, the low-pass is like convolving with a kernel consisting of unit-spaced samples of the b-spline basis function of the given degree. The transfer function is smooth, there is no ripple at all, but the attenuation of high frequencies is not very pronounced (unless the degree is 'quite high') and the transition band is quite wide. So I implemented other methods which are more efficient for the given purpose:

While the method outlined above works well, computing it is quite expensive. Quite recently, I extended vspline to widen the scope of basis functions from the 'pure' b-spline basis functions to arbitrary locus-driven basis functions, which enabled me to implement other decimation schemes. Two of these schemes are available in lux, and they are introduced by passing negative values to bls_i_spline_decimator.

area decimation

To use area decimation for downscaling, pass -1. This method is very fast and has a few desirable mathematical properties. This worked well for my initial implementation of the B&A algorithm, but with my switch to aRGBA code it did at times produce (slight) artifacts - now the default is -2, a small binomial - see below.

To describe area decimation - as used by lux - graphically, imagine the source image as composed of square pixels having a specific area, rather than being points. Now imagine a square stencil, as large as or larger than a pixel's area, which you put on top of the source image. What's seen through the stencil is averaged, yielding the output for the position the stencil was placed at. lux' area decimation is limited to stencil positions where the stencil's boundaries are parallel to the image's axes, which is fine for downscaling.
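In one dimension, the stencil picture translates to box averaging with fractional pixel coverage. Here's a small Python sketch of that idea - an illustration only, not lux' implementation, which works on two-dimensional data and is vectorized:

```python
# 1-D sketch of area decimation: each output sample is the average of the
# input signal over a box of width 'step'; pixels which the box only partly
# covers contribute proportionally to their overlap with the box.
def area_decimate(signal, step):
    out = []
    pos = 0.0
    while pos + step <= len(signal) + 1e-9:
        acc = 0.0
        x = pos
        while x < pos + step - 1e-9:
            i = int(x)
            cover = min(i + 1.0, pos + step) - x   # overlap with pixel i
            acc += signal[i] * cover
            x = i + 1.0
        out.append(acc / step)
        pos += step
    return out

print(area_decimate([1.0, 2.0, 3.0, 4.0], 2.0))  # [1.5, 3.5]
```

Note how the method works for non-integer steps as well - the stencil is not bound to the source grid.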

This method can be used to produce an 'alternative basis function' for a vspline evaluator object: an evaluator receives real coordinates, uses their integral part for picking a subset of coefficients and the remainder for weighting the coefficients. The weighting is done with the basis function.

The 'alternative basis function' used for area decimation is also available for animated sequence display (use --decimate_area=yes) and it's the default for still images which aren't made with B&A image splining, so it's quite widespread throughout lux, simply because it does the job well and is fast to compute. Animated sequences aren't rendered this way by default, because bilinear interpolation is a tad faster.

Note that when you're using area decimation for the image or quality pyramid, the degree of the splines in this pyramid will be forced to one, and the scaling step will be capped at 2.0.

convolving basis functors ('CBFs')

This is the second group of decimators I implemented after opening vspline up for alternative basis functions. In contrast to area decimation, it operates with spline coefficients, and requires more time to compute. To use it, pass -2 or less for bls_i_spline_decimator. Again we have the option to use this downscaling method for pyramid generation for 'normal' view scaling (pass -2 to pyramid_smoothing_level), but it's not available for animated sequences, where it would be too slow.

Technically, the basis function for this decimator works by first calculating the 'regular' b-spline basis function values which would be appropriate to evaluate a spline at a given locus, and then proceeds to convolve this set of values with a small FIR kernel. The convolution of the locus-dependent b-spline basis function values with the low-pass can be done efficiently (both are separable), so overall the computation of the decimation is not too expensive, coming out near a quintic b-spline. And the results, for decimation, look good. The drawback here is that due to the use of a specific fixed low-pass kernel, the filter removes roughly the upper half of the spectrum, which is ideal for decimation to half-size, but unnecessarily much for decimation steps below two. Visually, this is not a big issue, because it only affects the distribution of frequency content to the pyramid levels, whereas the final output always encompasses all pyramid levels: the laplacian pyramid holds the differences of each pyramid level and the up-scaled level above it, so this datum is affected by the concrete filter, but the final 'collapse' of the pyramid takes all levels into account.
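The gist of the construction can be sketched like this in Python, for illustration - vspline's actual CBF code is C++ and separable/vectorized:

```python
# Sketch of the CBF idea: the effective weighting at a given locus is the
# convolution of the b-spline basis function values for that locus with a
# small low-pass kernel - here the 5-tap binomial.
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

binomial = [1/16, 4/16, 6/16, 4/16, 1/16]   # sums to one
linear_basis = [0.5, 0.5]                   # degree-1 basis, locus 0.5
cbf_weights = convolve(linear_basis, binomial)

# a low-pass convolved with a partition-of-unity weighting still sums to one:
print(abs(sum(cbf_weights) - 1.0) < 1e-12)  # True
```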

There are now additional downscaling filters available, which are activated by passing negative numbers and work well with scaling steps near 2.0:

  • pass -3 to use the binomial kernel ( 1/16 * ( 1 , 4 , 6 , 4 , 1 ) )
  • pass -4 for an 'optimal Burt filter'. This is taken from vigra, see the function vigra::initBurtFilter, online docu at
  • pass -(4*N-1) with N >= 2 to use an FIR halfband filter with (4*N-1) taps: a truncated sinc with a Hamming window. Try -7 or -11, and only proceed further if you have good reason to do so. These decimators also don't work correctly with my current state of aRGBA code, so best avoid them for image blending and fusing.
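For the halfband filters, the recipe described above (truncated sinc, Hamming window) might look like this in Python - an illustration of the recipe only, not lux' actual code; the precise window and normalization lux uses may differ:

```python
import math

# Sketch: a (4*N-1)-tap halfband kernel - a sinc at half the sample rate,
# truncated, Hamming-windowed and normalized to unit sum.
def halfband(n_taps):
    assert n_taps % 4 == 3             # 7, 11, 15, ... taps
    m = n_taps // 2
    taps = []
    for k in range(-m, m + 1):
        x = k / 2.0
        s = 1.0 if k == 0 else math.sin(math.pi * x) / (math.pi * x)
        w = 0.54 + 0.46 * math.cos(math.pi * k / m)   # Hamming window
        taps.append(s * w)
    norm = sum(taps)
    return [t / norm for t in taps]

k7 = halfband(7)                       # what '-7' would select
print(len(k7), abs(sum(k7) - 1.0) < 1e-12)  # 7 True
```

A give-away of halfband filters: apart from the center tap, every second tap is (near) zero.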


lux forms the levels of the laplacian pyramid by calculating the difference between two continuous functions representing the current 'top' level and the current 'bottom' level. These two functions take the place of the two levels of the gaussian pyramid - the bottom level as-is and the top level 'expanded' to the same size. The bottom level function will only ever be evaluated at on-grid positions, so it can be a spline of any degree. The top level function should provide a surface which represents a low-pass-filtered version of the bottom level. To provide this function, lux takes a special approach:

The first step, the creation of the top spline, is standard pyramid-building fare: the decimator is a functor which produces a low-pass-filtered version of the bottom level signal, and the resulting low-pass-filtered signal is sampled at positions corresponding to the target grid (the top spline's coefficients). After this step is complete, the top spline can be evaluated directly, but this will only ever happen when it occurs in the 'bottom' position. This step is the equivalent of a 'reduction' step in the conventional algorithm, where a low-pass-filtered 'bottom' level is decimated to populate the 'top' level. The spline here is - per default - of degree one, and I haven't managed to find a way to safely use higher spline degrees for the purpose: b-splines can overshoot or undershoot the range of the values they were built from, and in 'unlucky' constellations, this can cascade into ringing artifacts spoiling the result. But there is a way to avoid the over/undershoots, at the expense of accepting the suppression of some higher-frequency content: this can be achieved by omitting the prefiltering. A spline is always bound to the convex hull of its coefficients, so if the knot point values are taken as coefficients the spline can't over/undershoot, but the interpolation criterion is no longer fulfilled, and the resulting signal is low-pass-filtered. We'll see that this is not a grave problem, and in the end we don't actually lose anything, because the 'laplacian' conserves the entire signal - only its separation into the different layers of the laplacian pyramid is affected.

So now for step two: the shift. Shifting a b-spline (this is specific to vspline, and not a generally used term) means to reinterpret its coefficients as coefficients of a spline of a different degree. The top-level spline is degree-1, so evaluating it directly would use bilinear interpolation, which is not very smooth. So lux shifts this spline up to the given 'i_spline_shift_to' degree (gleaned from bls_i_spline_shift_to). If the new degree is two or greater, the spline will be smooth - ever smoother with rising degree - but at the same time it will lose some high frequency content. The default of two for bls_i_spline_shift_to is a compromise, providing a curve which is continuous in the first derivative and has only little high frequency attenuation. The additional slight low-pass on the 'top' level makes the transfer function slightly less 'sharp', but this does not harm the algorithm's performance and it does not degrade the final outcome, because the laplacian pyramid is made up from differences of a level and the expanded level above it - if the expansion produces a frequency pattern which is not precisely the difference of two gaussian levels, the difference comes out differently, but the final summation still yields the original image.

To sum up the pyramid building code: we use a choosable 'decimator' to 'contract' a pyramid level to a smaller level 'above' it, where the 'decimator' is some kind of low pass filter, which will usually resemble a gaussian - in lux, we can use small binomials, a Burt filter or b-spline reconstruction kernels. To form the analogue of a laplacian level, we expand a 'top' level by building a degree-1 b-spline over its values and, optionally, shifting this spline up for smoothness. Then we evaluate at positions corresponding to the 'bottom' level: we interpolate from the 'top' signal to an expanded signal with as many samples as the 'bottom' signal we want to subtract it from. This expanded top level is then subtracted from the bottom level, yielding the analogue of the laplacian level.

It's important to proceed in just the same way when 'collapsing' the 'laplacian' pyramid: the 'current' level (starting at the very tip of the pyramid) is initially a degree-1 spline - aka a plain image. This spline is shifted up and then evaluated at positions corresponding to the grid positions of the 'next level down', providing the expanded signal to which the next laplacian level is added in the course of the collapse. The summed signal is, again, interpreted as a degree-1 spline, which is up-shifted and expanded to the next level down, where the next laplacian level is added, and so on - until the algorithm arrives at level zero, where the final laplacian level is added and the iteration ends, yielding the result. Just running the collapse of the laplacian pyramid would restore the image which was initially passed in as one partial image of the set which is to be 'splined' together. To effect the blending of several images, the laplacian levels are multiplied with a weighting function, so that all weights applied at a given grid position of all laplacian levels sum up to one. This weighting is taken from the 'quality pyramid', a gaussian pyramid of weighting terms in the range of zero to one. The weights can be generated in an arbitrary fashion, but their sum must not exceed one. Where the sum of weights is zero, lux assumes that no partial image has a visible contribution and renders the corresponding pixel transparent.
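The build/collapse roundtrip is easy to demonstrate in one dimension. The following Python sketch uses plain grid-bound reduce/expand steps instead of lux' continuous splines, but it shows the key property claimed above: whatever low-pass the 'reduce' step uses, collapsing the laplacian pyramid restores the original signal:

```python
# Tiny 1-D laplacian pyramid sketch (illustration, not lux' code).
def reduce_(sig):                      # decimate by two, 3-tap binomial
    n = len(sig)
    return [0.25 * sig[max(i * 2 - 1, 0)] + 0.5 * sig[i * 2]
            + 0.25 * sig[min(i * 2 + 1, n - 1)] for i in range((n + 1) // 2)]

def expand(sig, n):                    # linear interpolation back to size n
    return [sig[i // 2] if i % 2 == 0
            else 0.5 * (sig[i // 2] + sig[min(i // 2 + 1, len(sig) - 1)])
            for i in range(n)]

def build(sig, levels):
    pyramid = []
    for _ in range(levels):
        top = reduce_(sig)
        pyramid.append([b - e for b, e in zip(sig, expand(top, len(sig)))])
        sig = top
    pyramid.append(sig)                # the tip: a plain 'gaussian' level
    return pyramid

def collapse(pyramid):
    sig = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        sig = [e + l for e, l in zip(expand(sig, len(lap)), lap)]
    return sig

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
assert collapse(build(x, 2)) == x      # exact reconstruction
```

To blend several images, each laplacian level would be multiplied with its weighting term before the collapse, just as described above.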


As described in bls_i_spline_degree, the image splines in the equivalent of the gaussian pyramid are degree-1, so plain image data without prefilter applied. When a 'gaussian' level is expanded to be subtracted from the level below it (to form a laplacian level), the spline may be 'shifted' to a higher degree, resulting in smoothing of the signal. bls_i_spline_shift_to sets the degree to which the spline is shifted, and the default of two is a good compromise.


This option will accept the same range of values as bls_i_spline_decimator, with the same semantics, but it's used for the 'quality' pyramids in the modified B&A algorithm - the pyramids of quality values derived from the initial per-pixel weights, which are used in the algorithm to weight the laplacian pyramids before they are collapsed into the final result.

Again the default is to use a small binomial (setting -2), which works equally well for the quality pyramids.


This option will accept the same range of values as bls_i_spline_degree, with the same semantics, but it's used for the 'quality' pyramids in the modified B&A algorithm - the pyramids of quality values derived from the initial per-pixel weights, which are used in the algorithm to weight the laplacian pyramids before they are collapsed into the final result. To my surprise it seems okay to use spline degrees higher than one in this slot, whereas for the image spline that results in flawed output. Note that the data in the image and quality pyramid are 'restored' to degree-1 after the pyramid construction is done.


The splines in the quality pyramid are also degree-1, and 'shifting' them up smoothes the quality, or weighting, signal, while the summation-to-unity property is upheld. Again, degree 2 is a good compromise.


This option only has an effect when rendering quality is set to automatic (--auto_quality=yes). It sets the 'frame rendering time budget' (in msec). When this option is not set, lux automatically settles on a budget which is just some 20% under the GPU frame time (so, if your system is running 50fps, GPU frame time is 20msec and the budget will be fixed near 16msec). If you pass a smaller value here, you can force lux to render frames within the budgeted time, if necessary using 'moving image scaling' to sacrifice image quality for speed. This will only work if 'moving_image_scaling' is not fixed; that's why --auto_quality=yes is needed. Passing a value larger than the GPU frame time will not have an effect unless the actual frame rendering time exceeds the GPU frame time - then your display may start stuttering. Using 'budget' with small values can be helpful to force lux' resource use down beyond the default. If you go too low here, you'll get blurred and unstable animations.

The default budget may be too conservative - you can try raising the value to just under the GPU frame time, so to just under 20 for 50fps, but the closer you get to the GPU frame time, the more likely you are to see stutter due to dropped frames.
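The heuristic quoted above is simple arithmetic - here it is spelled out, with the 20% margin taken from the text (the exact constant inside lux may differ):

```python
# Default frame rendering time budget: some 20% under the GPU frame time.
def default_budget_ms(fps, margin=0.2):
    frame_time = 1000.0 / fps          # GPU frame time in msec
    return frame_time * (1.0 - margin)

print(default_budget_ms(50.0))  # 16.0  (50 fps -> 20 msec frame time)
```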


This option affects the creation of image pyramids used to display scaled-down views, and also the creation of an interpolator for magnified views. It's set to 'yes' by default. Lux processes incoming image data in several stages:

The first stage is to read the data from disk 'as they are' - meaning that, if the image file contains, e.g., 8-bit values, the image data are read as 8-bit values and placed into memory as such. From that moment onward, lux can display the data, but it may decide to postpone the display if producing a view from the raw data is considered problematic - for example, because it needs strong down-scaling, which would result in aliasing.

The second - optional - stage is to build the 'raw pyramid'. This is a single-precision float image pyramid starting with a scaled-down version of the raw image. The raw image is retained and used for magnifying views or slightly down-scaled views, and the pyramid is used for views which are down-scaled more. This division avoids converting the raw data at 'level 0' to single-precision float, because the resulting level-0 float image would take up a lot of space and the float data can also be calculated 'on the fly': the 'damage' of quantization has already been done and can't be undone, and conversion to higher precision data will not mitigate it, so there is no point in actually performing it, apart from avoiding the (costly) transformation from one data type (the more quantized type used for storage, like unsigned char) to another (the less quantized one used for further processing, like single precision float). So retaining the raw data and converting to float when the float data are actually needed will add a bit of processing time (for the conversion) but save memory (because the raw data typically take up less memory space).

So the second-stage representation of the image data is still quite compact, but interpolation is limited to bilinear interpolation for animated views and a shifted spline (shifted to degree 2, which is smooth, but slightly blurred because of the omission of the prefilter) for still images. Why so? We can't correctly prefilter the coefficients unless we accept additional degradation due to quantization of the result, and unless we clamp the result to the range of the raw data type, which, after all, can't hold values outside its range. If we use a shifted spline here, the resulting curve will remain inside the convex hull of the (unmodified) coefficients and therefore in-range. Note that this is only one way of dealing with the problem, and the only one that lux uses for now.

The creation of the 'raw pyramid' is only possible for images in linear RGB, or when sRGB images are processed with linear mathematics (see process_linear). If that is not the case, the second-stage representation only contains the raw image data and the creation of the 'raw pyramid' is suppressed altogether.

How fast frames based on the second-stage representation render can vary - several factors are involved, and it depends on the target system which of the factors has more influence on the final speed of the rendering process:

Lux usually considers the stage-two representation as 'good enough' for all types of on-screen display, because both magnified and - possibly - scaled-down views can now be rendered in 'decent' quality. Creation of snapshots, stitches, fusions etc. is postponed until the image data is processed to the final stage, which may involve proceeding to stage three:

Finally, if build_pyramids is passed 'yes' (the default), the 'final' pyramid is built. This pyramid consists of a 'complete' image pyramid starting with level zero, all in single-precision float, and, possibly, an additional b-spline for magnified views - the latter is only created for spline degrees larger than one, where prefiltering is needed. The level-0 raw data are now discarded, and all rendering uses the stage-three representation. When this stage is reached, 'interpolator building' is complete (you'll see the change in the status line).

When passing --build_pyramids=no, stage three is omitted, and lux has to make do with stage-two data. Oftentimes this is perfectly good enough, and it's a fair bit quicker to set up and takes less memory, while sacrifices in speed and quality are small. It's a reasonable choice for simply viewing images, and it's good for creating output which does not need magnification. Where magnified still images are needed, the (very) slight blur due to shifting may be a reason to avoid it - you'll have to judge yourself whether this is an issue for you.

On the other hand, producing the stage-three representation is quite fast for 'normal' images, and usually rendering times come out faster if the stage-three representation is available and image quality is a bit better. So my advice is to only use --build_pyramids=no when you have good reason to do so, like when displaying very large images where memory consumption is an issue.

There is one drawback of using 'proper' b-splines for magnifying views: they can overshoot if the image data aren't sufficiently band-limited. Sensor data are usually sufficiently band-limited, but processed images tend to lose that quality, as they become sharpened or scaled down - and artificial images can lack it altogether: the 'worst' kind of material is vector graphics without antialiasing. The resulting renditions show 'ringing artifacts'. If your data aren't band-limited, your best option may be to avoid 'proper' b-splines.


Like build_pyramids, this option affects the creation of image pyramids. The default here is 'yes', but by passing 'no' you can stop lux from building any pyramids in the stage-two representation, limiting it to the raw data and what can be derived immediately from them, as described in the section above.

If build_pyramids is set to 'yes', this option ultimately has no effect; only the sequence of actions used internally to set up the stage-three representation is changed around a bit. But if build_pyramids is also set to 'no', there will be no image pyramids at all, and all rendering will be done from the raw image data, which may produce quality issues like aliasing for scaled-down views or slight blur for magnified views. Working without image pyramids may be acceptable for just viewing images, but especially the aliasing with down-scaled images can be annoying. Omitting the image pyramids is a bit of a desperate measure, which is only really called for if you're in a very tight spot memory- or CPU-wise and can't even afford the stage-two representation with its 'abbreviated' pyramid. But, yet again, you may want to choose to do so, and lux lets you specify it.

Please note that lux may set build_raw_pyramids to 'no' automatically if building the 'raw' pyramid can't be done correctly from the raw data. This happens if the image data are sRGB and process_linear is set to 'yes' (a common combination) - or if process_linear is set to 'no'. If this happens and build_pyramids is also set to 'no', all rendering will be based on the raw image data, with no pyramid-based downscaling at all.

If all of this 'pyramid stuff' sounds complicated and confusing, please keep in mind that most image viewers simply give you no choice - they'll usually work from the raw data and use nearest-neighbour or at best bilinear interpolation. Lux makes an attempt to offer high quality visuals by default, and to offer compromises where this is not feasible, hence the pyramid-building options. The defaults make it as easy to use as a viewer which leaves you no choice in the matter, and, at the same time, provide high quality output. The pyramid-building options are for situations where the defaults don't work for you.


sets the threshold for click and drag processing, producing a 'dead zone' near the initial click position where there is no effect. If total displacement is less than click_drag_threshold, the click-drag is silently ignored, and if the threshold is exceeded, the displacement is diminished by the threshold. For now, we apply this logic for both primary and secondary click-drag (one might use different thresholds). The thresholding is done by looking at the absolute displacement, which makes the code slightly more complex, but more intuitive than looking at dx and dy individually.
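A sketch of this logic in Python (an assumed reconstruction from the description above, not lux' actual code):

```python
import math

# Dead-zone handling for click-drag: displacements below the threshold are
# ignored; beyond it, the displacement magnitude is diminished by the
# threshold, so movement starts smoothly from zero at the zone's edge.
def apply_threshold(dx, dy, threshold):
    d = math.hypot(dx, dy)             # absolute displacement
    if d < threshold:
        return (0.0, 0.0)              # inside the dead zone
    scale = (d - threshold) / d        # shrink the magnitude by 'threshold'
    return (dx * scale, dy * scale)

print(apply_threshold(6.0, 8.0, 5.0))  # (3.0, 4.0)
```

Working on the magnitude rather than on dx and dy individually keeps the behaviour direction-independent.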


When building pyramids with a decimator based on a vspline 'convolving basis functor' (CBF for short), the decimator works on a b-spline substrate, rather than on un-prefiltered image data. cbf_degree sets the spline degree for this spline. The default, a cubic spline, is already quite close to ideal, but takes a while to compute - its support is 16 coefficients, after all. If pyramid buildup time is an issue, you can lower this degree - down to one is 'sensible'. On the other hand, if you want to get the best quality possible, increase the degree, but expect diminishing returns as the degree rises, and eventually adverse effects if the spline degree gets too high to 'handle' in single precision float.

So you may ask "what's a CBF?" It stands for 'convolving basis functor' and it's a hybrid between simple b-spline evaluation and convolution with an FIR filter. You can make lux use CBFs by passing pyramid_smoothing_level -2 or smaller; please see there for the meaning of the possible values. The CBFs lux offers are based on a variety of low-pass filters, and using them is equivalent to first filtering the image data with this low-pass filter, and then building the spline over the result.

Note that all of this only affects down-scaled views, so to observe any effects related to this option, you need zoom levels of less than one. And spotting the difference is really quite hard.


This is one of lux' 'compound' options, or 'actions': it bundles a set of several options which would typically be used together to a specific end. You have to pass 'yes' to trigger it, passing 'no' has no effect at all.

Here, what's intended is to create a 'faux bracket' output image (see faux_bracket), with 'faux' partial images of -2Ev, 0Ev and +2Ev. This image is made as a 'source-like' rendition, so if you're processing a single image, the output will have the same size, projection etc. as the input - and if you're working with PTO input, the output will reflect the settings in the p-line. The output is made immediately, the output file's name will depend on whatever settings are active (the default is to add a suffix to the input file name; snapshot naming options are honoured), and after it's ready, lux proceeds to the next image - if any. You can think of this action like the body of a loop which is applied to all images, so you can use it on sets of images:

lux --compress=yes *.pto

will make faux brackets of all pto files in the current folder. You can achieve the same effect by passing several 'normal' options, and if you need more control (e.g. different Ev values or more partials) you can't use 'compress' - but it's quick and easy and the settings are reasonable, so it may be a 'good enough' solution for 90% of the jobs - or a starting point from which you can start your own tweaking.

When processing PTO files describing panoramas, lux will internally create a stitched rendition and then apply the faux bracketing to it - the result will be the same as if you first produced stitched output in an image file and then applied the faux bracketing in a second lux invocation. There is a small difference, though: If the 'compress' action is used, the intermediate stitch is kept in single precision float with as much dynamic range as the source images have to offer, while writing intermediate images to disk will usually produce quantization and range compression errors, depending on the format used.


This is a parameter influencing the 'weighting' when producing an image fusion, and its counterpart is 'exposure_weight'. The default is to use only exposure_weight and leave contrast_weight zero; this is used for exposure fusion, working on exposure brackets. contrast_weight is also usually used 'alone' - so, with exposure_weight set to zero - and it's used for 'focus stacks': series of images taken with the same brightness but different focus. Both parameters can be mixed; please refer to the section on exposure_weight for more about mixing them - here I'll explain the use for focus stacks:

Image fusion with the Burt and Adelson image splining algorithm is based on a 'quality' measurement, which determines how much each of the source images should contribute to the final outcome. This is the basis; the algorithm, with its use of image pyramids, only makes sure that the result does not contain ungainly discontinuities, as they would occur when the quality measure is used on a per-pixel basis. What's actually used as the quality measure is independent from the splining algorithm; the most common measures are either spatial (take this section of this image, that section of that image), like in panorama stitching, or based on local image qualities (like contrast, entropy, brightness), as used for image fusion. Using contrast weight is based on measuring the local contrast and basing the quality judgement on it. Classically, local contrast is measured with a small FIR filter kernel, and the convolution with the kernel gives a contrast value for every pixel. This is a discrete method - the result is per-pixel and there are no immediate subpixel values available. Lux instead uses the gradient of a b-spline, which is continuously defined and therefore better suited to the modified B&A algorithm in lux, which needs continuous pyramid levels to calculate the laplacian pyramid on-the-fly for arbitrary coordinates.
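To illustrate the weighting principle - not lux' spline-gradient code, but the same idea with plain central differences standing in for the gradient:

```python
# Contrast-driven weighting sketch: per pixel, measure local contrast in
# each image of the stack and normalize so the weights sum to one.
def contrast(img, x, y):
    gx = 0.5 * (img[y][x + 1] - img[y][x - 1])
    gy = 0.5 * (img[y + 1][x] - img[y - 1][x])
    return gx * gx + gy * gy           # squared gradient magnitude

def stack_weights(stack, x, y):
    c = [contrast(img, x, y) for img in stack]
    total = sum(c)
    return [v / total for v in c] if total > 0 else [0.0] * len(stack)

sharp = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]    # hard edge: strong contrast
blurred = [[3, 3, 3], [3, 3, 3], [3, 3, 3]]  # no contrast at all
print(stack_weights([sharp, blurred], 1, 1))  # [1.0, 0.0]
```

The in-focus (high-contrast) image receives all the weight at this pixel; the B&A machinery then sees to it that such per-pixel decisions don't produce visible seams.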

My tests with the b-spline-based approach for focus stacking have shown promising results, but I haven't found the time to do extensive comparisons with the 'classic' implementation, which would be a nice topic for a paper - anyone interested?

Using contrast weighting for focus stacks is easily done with lux: the easiest way is to just pass in a .pto with a registered focus stack together with --focus_stack=yes, an abbreviated 'action' parameter setting all other arguments needed to immediately produce output to an image file:

lux --focus_stack=yes my.pto

This will produce an image file my.pto.lux.1.fused.jpg containing the fused result.

If you want to set the parameters separately, keep in mind that exposure_weight is 1.0 per default, which does not change when you pass contrast_weight, and contrast_weight is 'dominant', usually producing numerically larger results. So to get a 'pure' focus stack, use

lux --blending=hdr --exposure_weight=0 --contrast_weight=1 my.pto

The 'blending=hdr' is a bit misleading - after all, we're not doing HDR blending here - maybe I'll rename this blending mode to 'fusion', which would be more appropriate. The naming dates from when it was either panoramas (blending=ranked) or HDR blending (blending=hdr); exposure fusion, focus stacks and deghosting were later additions.


cropping_active

This option is used by lux internally, and is set to 'true' when PTO input is processed which has cropping active in the p-line specification of the output. If you want to pass cropping information on the command line, you need to pass --cropping_active=yes, plus the cropping parameters (uncropped_width, uncropped_height, uncropped_hfov, uncropped_vfov, crop_x0, crop_y0, crop_x1, crop_y1).

crop_x0, crop_y0, crop_x1, crop_y1

These options are used by lux internally to process the extent of a cropping window specified in a PTO file's p-line. The values are integers, in pixel units. Passing them on the command line has no effect unless you also pass cropping_active. Indexing starts at zero, and the values ending in '1' give the pixel location just after the cropped area, so with crop_x0 = 0 and crop_x1 = 100 you specify a cropped area 100 pixels wide, starting at the left margin. So crop_x1 - crop_x0 yields the width of the cropped area, and this must coincide with the width of the image data in the source image. The same holds true for crop_y1 - crop_y0 and the image height.

As the source image's size can be gleaned from the image file, one might argue that it would be sufficient to pass crop_x0 and crop_y0. I opted to pass the full set, producing a bit of redundancy to allow for consistency checking. The extent of the uncropped image is passed with the options 'uncropped_width' and 'uncropped_height'.
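The consistency check mentioned above boils down to simple arithmetic. As a sketch in Python (not lux code, just the relation the options must satisfy):

```python
# Hypothetical sketch of the consistency check the redundant crop
# parameters allow: the cropped extent must match the size of the
# image data in the source image.
def crop_consistent(crop_x0, crop_y0, crop_x1, crop_y1,
                    image_width, image_height):
    # crop_x1/crop_y1 address the pixel *after* the cropped area,
    # so the differences yield width and height directly
    return (crop_x1 - crop_x0 == image_width
            and crop_y1 - crop_y0 == image_height)
```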


If set to 'yes' (the default) lux will crossfade to the next image.


This factor affects the speed of the crossfade. For every crossfading step, this factor is added to a variable which is initially zero, and when it reaches one, the crossfade is complete.

The default here is 0.05, which makes for a brief, but noticeable, crossfade on a 60Hz monitor.
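As a worked example of the arithmetic, assuming one crossfading step per displayed frame:

```python
# Worked example of the crossfade arithmetic: with the default factor
# of 0.05, the accumulator reaches one after twenty steps.
import math

factor = 0.05                      # added per crossfading step
steps = math.ceil(1.0 / factor)    # steps until the accumulator reaches one
duration = steps / 60.0            # seconds on a 60Hz monitor, one step per frame
```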

The crossfading is done on the GPU: when the display of an image ends, the last texture which was displayed is kept active and set aside. When the next image is displayed, this 'lingering' texture is assigned to a sprite and displayed on top of the new image. This is repeated with ever-decreasing opacity of the overlay sprite, until the crossfade is complete. Only then is the lingering texture released. Since the crossfading is GPU-only, it is fast and produces no CPU load.

cube_left, cube_right, cube_top, cube_bottom, cube_front, cube_back

These options are used to pass six square rectilinear images to be used as cube faces for a cubemap. The options only work together with projection=cubemap and have no effect otherwise. To display a cubemap, all six cube faces are needed. Here's an example of a lux ini file for a cubemap (the image file names are placeholders):

projection=cubemap
cube_left=left.tif
cube_right=right.tif
cube_top=top.tif
cube_bottom=bottom.tif
cube_front=front.tif
cube_back=back.tif


Cubemaps are a good way of presenting 360X180 content - they are quite fast to render and reasonably compact, and the individual images have relatively little distortion, so editing them with 'normal' image processing software is possible.

Cubemaps in lux are similar to openGL cubemaps, but they don't live on the GPU: like all other image processing in lux - apart from the final display of the rendered textures - it's done on the CPU, with the data stored in system RAM. openGL cubemaps consist of six mipmaps; lux cubemaps consist of six image pyramids of the same special type used throughout lux: they are built from b-splines, rather than 'plain' raster data.

When lux combines several images into a synoptic display, it usually does that with a facet map. It's entirely possible to pass six square images to lux as a facet map to get a 360X180 degree view, but the cube geometry allows for a few time-savers, which - combined with other minor constraints on the cube face images - result in faster processing. Cubemaps are mainly intended as a format to store 360X180 degree panoramas so that they can be displayed efficiently.

The cube faces have to be in rectilinear projection and they have to be square. All cube faces have to have the same field of view (see cubeface_fov), which should ideally be a tad more than 90 degrees, which helps to avoid edge and vertex errors. They have to be oriented in a specific way, explained in the documentation like this:

picture yourself inside the cube. All faces 'around' you should be upright when you face them. When facing the front face, looking up or down should show the top or bottom face 'the right way round', so you can sweep your gaze from zenith to nadir with the cube faces strung up in the 'right' orientation.

The documentation has a full chapter on cubemaps, please refer to that for more.


cubeface_fov

Value, in degrees, for the field of view of all six cube faces in a cubemap. So, to the cubemap example above, you'd add one line like

cubeface_fov=90.5


This specifies a field of view slightly larger than 90 degrees. It's also the field of view you have to pass as 'hfov_view' when you're making cubemaps with lux from some other 360X180 image.

Why the extra? To render target pixels, lux has to decide from which cube face it picks the source pixel. If the source pixel is precisely on an edge or vertex of the cube's face, the interpolation used at this locus is subject to boundary effects: the interpolators are individual per face and 'know' nothing of the other cube faces. So an edge pixel picked from the right edge of the front cube face and one from the left edge of the right cube face may share the same locus, but come out differently due to the boundary treatment (REFLECT in cubemaps). If the source images for the cube faces are slightly larger than 90 degrees, there is a bit of redundancy, and the source pixel can't be 'on the edge' of the source image - hence, there are no boundary effects, because they have been 'pushed out' to an area which is never used. If you want to cut it very fine, allow just a few extra pixels beyond 90 degrees. And you may find this perfectionism excessive and use precisely 90 degrees - the flaws are really very minor, and you'll have difficulties spotting them.
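If you want to calculate the field of view for a given pixel margin, here's a sketch of the geometry. It assumes the usual rectilinear model, where a 90-degree face implies a focal distance of half the face's width in pixel units; the function name is made up for illustration:

```python
# Geometry sketch (an assumption about the mapping, not lux code): for a
# square rectilinear cube face of 'size' pixels and 90 degrees fov, the
# focal distance in pixel units is size / 2. Keeping that focal distance
# and allowing a few pixels of margin per side yields the slightly
# enlarged field of view to pass as cubeface_fov.
import math

def fov_with_margin(size, extra_pixels):
    half = size / 2.0                     # focal distance for 90 degrees
    return 2.0 * math.degrees(math.atan((half + extra_pixels) / half))
```

With these assumptions, fov_with_margin(1024, 8) comes out at roughly 90.9 degrees.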

Just to be perfectly clear about it: the square images have to be rendered to show the slightly larger field of view which you pass here - if your cube face images have a field of view of precisely 90 degrees, you must pass 90.0 here or omit the option.


decimate_area

This is a boolean flag which defaults to 'no'. Setting it to 'yes' switches to the use of 'area decimation' for animated sequences.

I've already explained area decimation in the chapter on 'bls_i_spline_decimator', but I'll repeat it here:

To describe area decimation - as used by lux - graphically, imagine the source image as composed of square pixels having a specific area, rather than being points. Now imagine a square stencil, as large as a pixel's area or larger, which you put on top of the source image. What's seen through the stencil is averaged, yielding the output for the position the stencil was placed at. lux' area decimation is limited to stencils whose boundaries are parallel to the image's axes, which is fine for downscaling.

This method can be used to produce an 'alternative basis function' for a vspline evaluator object: an evaluator receives real coordinates, uses their integral part for picking a subset of coefficients and the remainder for weighting the coefficients. The weighting is done with the basis function.

What's interesting in this context is that area decimation is a scalable filter - the implementation in lux allows the stencil window's edges to range from 1.0 to 2.0, so stencil areas of up to 4.0 can be processed. This is precisely the range needed to cover a standard pyramid step. The standard decimation in lux simply finds the pyramid level closest in resolution to what's needed for the current downscaling factor and uses bilinear interpolation on this level, resulting in a noticeable discontinuity in sharpness when the switch is made to a different pyramid level. With area decimation, the procedure is different: the data are always taken from the next-lower pyramid level (so, with higher resolution) and the area decimator is adapted to the precise downscaling factor needed for the current view. This lessens the discontinuity when switching to a different pyramid level, especially if the pyramid was also created with area decimation and a scaling step of two, which is the default. There is still a noticeable switch, because the stencil window does not usually coincide with pixel boundaries. All of this is quite academic - to see the switching from one pyramid level to the other you may have to use the magnifying glass (press 'I' and zoom out). So using area decimation for animated sequences is nice-to-have, but since the visible effect is quite small and it takes a bit of extra time to compute, it's not the default. But if you want to produce the best quality of zoom-in or zoom-out sequences, you may want to consider it.
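To make the mechanism concrete, here's a hypothetical sketch - my reading of the scheme described above, not lux code - of how an arbitrary downscaling factor splits into a pyramid level and a stencil width:

```python
# Sketch: a scalable decimation filter with window edges from 1.0 to 2.0
# covers any downscaling factor - pick the next-finer pyramid level
# (scaling step two assumed) and adapt the stencil width to the
# remaining factor.
import math

def area_decimation_setup(scale):
    level = int(math.floor(math.log2(scale)))  # next-finer pyramid level
    window = scale / (2.0 ** level)            # stencil edge length in [1.0, 2.0)
    return level, window
```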


deghost

This is one of lux' 'compound' options, or 'actions': it bundles several options which would typically be used together to a specific end. You have to pass 'yes' to trigger it; passing 'no' has no effect at all. It creates a deghosted image from an image set, using lux' quorate blending mode.

Passing --deghost=yes needs a facet map to work on - typically you'd pass a PTO file. The PTO file should refer to a serial shot with constant Ev, where one image (or the minority of images) contains unwanted content, so-called 'ghosts': people walking through the set, birds or insects flying past, cars...

The set of images is loaded with --blending=quorate, and a snapshot is taken storing the deghosted image. The processing works on the pixel level, so if you have the car at the left in the first image, then in the middle in the next image and so forth, lux removes the 'car pixels' altogether. Note that deghosting with lux needs at least three source images in the PTO file to come to a 'quorate' decision - with only two images, it can't decide where the 'outlier' is. Note also that registration has to be very good, like for exposure fusions - otherwise outlier detection won't be possible.

And, to be completely honest: the 'ghost' content is not removed 'altogether', but its contribution is reduced to a very small amount, which is usually invisible. Note that if the 'non-ghost' content is not consistent, the 'ghost' content can't be identified properly and lux' deghosting method won't work correctly.
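To give a feel for the principle, here's a toy sketch in Python. lux' quorate blending is more elaborate, but the core idea - at least three samples per pixel, with the 'consensus' outvoting a single outlier - can be caricatured with a median:

```python
# Toy sketch of the idea behind 'quorate' blending (not lux' actual
# implementation): with at least three source values per pixel, the
# median is a robust 'consensus' which suppresses a single outlier -
# the 'ghost'.
def quorate_pixel(values):
    assert len(values) >= 3, "need at least three images to outvote a ghost"
    s = sorted(values)
    return s[len(s) // 2]
```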

exposure_sigma, exposure_mu

These two values are used for exposure fusions and correspond to the sigma and mu parameters given in the literature. Exposure fusions rely on a measurement of 'well-exposedness' in each of the partial, or source, images, which is translated into weights, which in turn are used as the base of the 'quality pyramid' in the modified Burt and Adelson image splining algorithm which lux uses. Well-exposedness is easily explained: the closer a pixel's brightness is to a 'middle' exposure, the better its 'exposedness'. But this is not a mathematically precise definition: What does 'middle' mean? And how does one quantify 'near'? exposure_sigma and exposure_mu are used for this purpose, together with a third processing step which is not currently parametrizable by the user.

Let's start with this last - opaque - step, which is called 'grey projection'. Above, it says that well-exposedness is based on a pixel's brightness. But a pixel usually has three channels: red, green and blue. So if we want to base the well-exposedness measure on 'brightness' we have to form a notion of how we combine the three colour channels into a single brightness value. For exposure fusions, lux uses a simple and fast grey projector: it's simply the average of all three colour channels.

The next notion we have to concretize is the 'middle exposure'. Exposure fusion in lux is done with images in sRGB colour space, with an assumed range of 0.0 to 255.0, even though actual values may exceed the range without ill effects (they will come out with very low weights). This range is mapped to the range of 0.0 to 1.0, and the default for a 'middle exposure' is right in the center, at 0.5. Now here's where exposure_mu comes in: it lets you set a different 'middle exposure' reference point. If you pass, say, 0.25, pixels with brightness 0.25 will be considered most 'well-exposed', and will contribute more to the final outcome than, say, pixels with brightness 0.5, which would have been favoured with the default setting.

Finally we have to decide how strongly the differences in well-exposedness should discriminate: we might want to favour the best-exposed pixel almost exclusively, ignoring less well-exposed pixels; on the other hand we might only give slightly more weight to better-exposed pixels, going more for something like an average. This is where exposure_sigma comes in: lux uses a gaussian bell curve for weighting, which has its peak where exposure_mu is set. The width of the bell is steered by exposure_sigma, the bell curve's standard deviation. Small values will make for strong discrimination and strong preference of well-exposed pixels; large values will produce little discrimination and an overall effect closer to an average. The default value of 0.2 is, again, from the literature, and it's a compromise which has turned out to produce pleasing output - but it's not necessarily optimal for every exposure fusion. Hence the two parameters.
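Putting the pieces together, the weighting described above can be sketched like this (assumptions: the plain averaging grey projector, brightness already mapped to the 0.0 to 1.0 range; lux' internals may differ in detail):

```python
# Sketch of the well-exposedness weight: a gaussian bell over the
# grey-projected brightness, peaking at mu, with width sigma.
import math

def well_exposedness(r, g, b, mu=0.5, sigma=0.2):
    brightness = (r + g + b) / 3.0   # simple averaging grey projection
    d = brightness - mu
    return math.exp(-d * d / (2.0 * sigma * sigma))
```

A pixel right at the 'middle exposure' gets weight 1.0; the further its brightness strays from mu, the smaller the weight, and the smaller sigma is, the faster the falloff.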

enfuse, the 'classic' exposure fusion program used by hugin, offers more flexibility: it does, for example, allow for the choice of grey projector, which is fixed in lux, and it has more quality measures. If you want to fathom the depths of exposure fusion with a lot of tweaking opportunities, you're probably better off using enfuse: lux only makes an effort to produce a reasonably simple and straightforward subset of parameters for the purpose, which are fast to compute and produce presentable results.

So, while the explanation above will suit the mathematically-minded, how does it look? Passing exposure_mu below 0.5 will result in a darker output, because, overall, more pixels with low brightness will be favoured and contribute more to the final outcome. Passing exposure_mu above 0.5 results in a brighter output. Passing 0 for exposure_mu will look very similar to the darkest image in your bracket, passing 1 will look similar to the brightest one. Going way beyond - say, 10 - will, surprisingly, not come out 'very bright': instead the gaussian weighting curve will make the partials similarly 'un-well-exposed', producing something reminiscent of an average.

Passing exposure_sigma less than 0.2 will result in amplified contrasts and more noise in the image - small variations in brightness may make a lot of difference in the weighting factor, which can amplify noise, but also strengthens contrast. The look can be quite 'dramatic' and get an unnatural feel if you overdo it. By the way - only the absolute value of exposure_sigma is relevant, so you can't get interestingly different results by passing negative values. With exposure_sigma larger than 0.2, the image will look duller and noise and contrast will lessen, which may be desirable: you may want to use exposure fusion to reduce noise, rather than to compress the dynamic range. You can even pass a serial shot with identically exposed images and use the exposure fusion exclusively for noise removal by averaging. But because noise appears predominantly in areas with low brightness, having long exposures in the set helps a lot with noise removal: they will have captured the scene with less noise in the first place, and they will receive more 'weight' in these areas because their content will be closer to 'middle brightness'. There's no harm in having an image in the set which is overexposed everywhere but in the darkest areas - overexposed parts will simply not contribute, while the darkest areas will be taken from this 'shadow shot' nearly exclusively, and with little noise. The same holds true for a 'highlight shot' where everything else is nearly black: it will provide the highlights, the rest will not contribute.

In the end it's up to you as the artist to decide what effect you want. Lux tries to occupy the middle ground of offering some tweaking potential without overdoing it.


exposure_weight

Lux currently (mid 2021) knows two criteria to assign a 'quality' measure to pixels in the source images used for exposure fusion, or image fusion in general. Exposure weight, a measure for the well-exposedness, is the more common one, and it's the one which lux uses by default. The other measurement lux can use is the local contrast, see 'contrast_weight' for a more detailed discussion - that measure is mainly used for focus stacks.

Most of the time you'll choose either one or the other, but it's also possible to mix them. When using, e.g., the well-exposedness criterion only, you pass exposure_weight as 1.0, and contrast_weight as 0.0 - this is also the default. If you want to mix both criteria, you can pass nonzero values to both. You'll have to do some experimentation to come up with a working combination - if you pass 1.0 to both arguments, it's likely the contrast_weight will have very little effect, because the contrast values which lux uses are usually numerically smaller than the well-exposedness values, so you may want to try using much larger values for contrast_weight, when it's combined with exposure_weight. Note that contrast weighting in lux is done with different methods to what enfuse uses, so the contrast_weight parameter from enfuse does not translate 1:1 to lux: enfuse uses a small FIR filter kernel to measure contrast in a pixel's vicinity, whereas lux uses the gradient magnitude of a b-spline.
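To illustrate the principle of a gradient-based contrast measure, here's a sketch using central differences on raster data as a simple discrete stand-in - remember that lux actually evaluates the gradient of a b-spline, so this is not lux' method, just the general idea:

```python
# Sketch of a gradient-magnitude contrast measure. lux evaluates the
# gradient of a b-spline; here, central differences on the raster data
# serve as a discrete stand-in to show the principle.
def contrast_measure(image, x, y):
    # image is a 2D list of brightness values; x, y must not be on the margin
    dx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    dy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return (dx * dx + dy * dy) ** 0.5
```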

Another point is that you'd typically use an exposure bracket with exposure_weight: you want to combine several images with different brightness. With contrast_weight, you instead use a focus stack, a set of images taken with slightly different focus points. If you feed an exposure bracket to code which is really meant for focus stacking, your result will reflect the local contrast of the several exposures, which may even have unwanted effects like increased noise. On the other hand, it may enhance detail and give the image a sharper and 'clearer' look. Again, it's up to you to judge if the effect works for you.


The image pyramids used by lux don't have to use scaling steps of two; you're free to use different values, typically larger than one and up to two. This parameter sets the scaling step for the pyramids used by the modified Burt and Adelson image splining algorithm, the algorithm lux uses to fuse or blend multiple images. This option is not available in enfuse, which uses the 'standard' scaling by a factor of two between pyramid steps - the algorithm employed by enfuse is highly optimized to do this decimation very efficiently, whereas lux uses a more general-purpose approach, which is slower but more flexible. You can, in fact, use any floating point value above one, so you're not limited to whole numbers.

So here's another tweaking opportunity, which may or may not help you - lowering the value below two will create more pyramid steps, and you can expect less level-switching artifacts, but more blurring. Again, the results may be hard to see without strong magnification. Approaching one, the number of pyramid levels will increase beyond a manageable amount, and processing times will skyrocket. This can't be helped, so don't go too close to one. Another interesting option, which you can explore with all decimators apart from 'area decimation', is to raise the scaling step above two. This creates a shallower pyramid, and if you raise the value too much, the multilevel blending effect will no longer work: you'll have noticeable discontinuities at the seams. So why would you want a shallower pyramid? For speed. You won't save much time - the bulk of processing goes to the bottom level of the pyramid, which is by far the largest and remains at full resolution - but if processing time is very limited, you may want to try this route.
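A quick back-of-the-envelope sketch shows how the scaling step affects the pyramid's depth (assuming a 'floor' of sixteen pixels for the smallest level):

```python
# Sketch: count pyramid levels from a base resolution down to a 'floor'
# for the smallest level, for a given scaling step.
def pyramid_levels(base_size, scaling_step, floor=16):
    levels = 1
    size = float(base_size)
    while size / scaling_step >= floor:
        size /= scaling_step
        levels += 1
    return levels
```

For a base size of 1024, a step of 2.0 yields seven levels; shrinking the step towards one makes the pyramid much deeper, and raising it above two makes it shallower.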

Note that this value is used internally by the fusion/blending code; these pyramid levels are not directly visible to the user at all, and the algorithm's inner workings mostly keep peculiarities of the downscaling code from having visible effects. The effects of 'messing' with pyramid scaling are more prominent with the image pyramids used for actually viewing images, see 'pyramid_scaling_step'.


This is yet another parameter which affects the image pyramids used internally by the modified Burt and Adelson algorithm in lux. It affects all processes using this algorithm - exposure fusions, image stitching and focus stacks. Image pyramids can be built to have a top layer of just a single pixel, but I found that the results are usually better when the top level still consists of an array of pixels, to keep some 'residual locality'. My choice for the default of sixteen is an informed guess and seems to work well (I've done a fair bit of trial-and-error with it), but following lux' philosophy of 'keep it configurable', and with only so much testing I can do myself, there is room here for more experimentation, and the default may change. The 'floor' in the name refers to the numerical value of the smallest pyramid level, so the level sizes go down to this 'floor' value as pyramid creation progresses. Nevertheless, this smallest level is - by conventional understanding - at the 'top' of the pyramid.

If you want to experiment with this parameter, sensible values for exposure fusion are small numbers starting with one (smaller values produce an exception), and if the value becomes too large you'll get what is known as 'local effects', noticeable small-scale gradients in the image, especially in uniform regions near high-contrast areas, often looking like 'halos'. The maximum value you might use is near the size of the image, which results in the 'naive' approach to image fusion: it builds a 'pyramid' with just one level, which is not a pyramid at all. Then, the per-pixel quality measures are immediately applied to individual pixels, with all the associated drawbacks: the result is entirely local. For focus stacks, this may even be desirable, but for exposure stacks it's not a good idea. Do a trial with different settings! Sometimes the effect of a value in the tens or hundreds is even quite nice. It definitely also depends on the content, and the effect of higher values is entirely different for exposure fusions and image stitching.

When it comes to stitching, using a value of one here is not problematic, but when the value gets too large, you'll not get the desired blending, especially not in large uniform areas like the sky, where the 'locality' does not give 'enough room' for an unnoticeable gradient to develop. On the other hand, if your images match well, you may get away with high floor values, and combining them with large scaling steps will blend quite quickly. The literature offers blending schemes with only two levels - one fine level for detail and one coarse level for smoothness - so by playing with the floor and scaling step values you can even approach such extreme schemes. Note that the floor value has to be considered in relation to the size of the output image! If you render to a full HD screen, using a floor value of 16 will have a different effect than when you're rendering to a 10000X5000 panoramic image.


fast_interpolator_degree

When an image is displayed, there is one important distinction: is it scaled up or down? Or, to express it more technically, is it subsampled or supersampled? If the display process is merely a change in size, you get a plain answer to the question; with geometrical transformations thrown in, the answer may become 'partly so, partly so'. The lux approach is to come to a clear answer to this question, based on a comparison of pixel 'size' (seen as an area) near the image center, and to do the necessary calculations accordingly, accepting possibly suboptimal results towards the image margins.

With the choice made, it's possible to give a measure of the scaling as a simple multiplicative factor, and values above one indicate a magnification. When a digital image is magnified, result pixels which don't coincide with source pixels have to be interpolated, and the method lux uses for this interpolation can be determined by setting the interpolator's 'degree': lux uses b-splines for interpolation, and they are characterized by the spline's degree, which can range from zero to seven in lux. vspline, the underlying b-spline library, can handle spline degrees up to 45, but for lux' purposes such large spline degrees are overkill - you'd have a hard time distinguishing visually between a quadratic (degree 2) and cubic (degree 3) b-spline.

fast_interpolator_degree sets the degree of the interpolator used for animated sequences. This may be separate from the interpolator used for still images (see quality_interpolator_degree), and the default is to use degree one for animated sequences and degree three for still images.

A setting of zero is also known as 'nearest neighbour interpolation'. It's the fastest mode possible, but it produces square-shaped pixels in the output, which is not usually wanted. Nevertheless it's good to be able to display a magnified image like this: you can see each of the source pixels individually and this is often a good way to judge very small details. Many programs displaying images use this mode exclusively.

A setting of one is also known as 'bilinear interpolation', which is a definite step up from degree zero, but when it's used to magnify images a lot, it produces what's known as 'star-shaped artifacts' - the view doesn't look entirely smooth. If the magnification is just 'a few times' you won't notice the adverse effect, it only becomes visible with high magnification. This mode is also quite fast to compute, and altogether it's a good compromise for animated sequences, unless the magnification becomes large.

Degrees above one use 'proper' b-splines, which are smooth. The difference from one degree to the next is small; even degrees two and three are hard to tell apart unless you magnify the image very much. It would be desirable to use, say, degree two for animated sequences, but the computational cost rises quadratically with the spline degree, so for animated sequences this is often already too much and results in dropped frames and stutter. You'll have to try it out with your content and system - if it works without dropping frames, use it if you need large magnification. Degrees beyond two don't help a great deal quality-wise.
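The quadratic growth in cost is easy to see: evaluating a 2D b-spline of degree n looks at an (n+1) by (n+1) window of coefficients:

```python
# The quadratic cost growth in a nutshell: a 2D b-spline evaluation of
# degree n uses a (n+1) x (n+1) window of coefficients.
def coefficients_per_evaluation(degree):
    return (degree + 1) ** 2
```

So nearest-neighbour (degree zero) touches one coefficient, bilinear (degree one) four, and a cubic spline (degree three) already sixteen.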

Keep in mind that this parameter affects magnified views and scaled-down views differently. When the view is scaled down - and this is often all you get to see, with the number of megapixels in modern camera images growing larger and larger - a different process becomes relevant: decimation. Lux' default decimation method is to pick a level from the image pyramid associated with the image which is nearest in resolution to the desired output, and to proceed by interpolation based on this level. The use of an image pyramid reduces aliasing, but switching from one pyramid level to another can be quite noticeable. The interpolation method used for the selected pyramid level is the one fixed by the spline degree, but because we're not magnifying, the effects are different: in animated sequences, a degree of zero for scaled-down views will be unpleasant due to aliasing effects, the image lacks stability and shows artifacts like 'sparkle'. A degree of one is usually fine, and larger degrees don't make too much difference - the star-shaped artifacts are not an issue with scaled-down views.

Apart from the default, which is based on b-splines, lux has a second decimation method: 'area decimation'. With this method, the switch from one pyramid level to another usually becomes less noticeable or even invisible, but it costs more CPU cycles. When this decimation method is used for animated sequences (use --decimate_area=yes) the spline degree given with fast_interpolator_degree is ignored for scaled-down views and only affects magnified views.

This is a good place to add a remark about b-splines. b-splines of degrees zero and one work on the unmodified image data - they require no 'prefiltering'. Higher-degree splines produce interpolated values from the spline's 'coefficients', which are generated from the image data by prefiltering them. This prefiltering step acts as a (mild) high-pass filter. The production of an interpolated datum from the set of coefficients near the interpolation locus acts like a (mild) low-pass, which is precisely as 'strong' as the high-pass used for prefiltering - in sum, both filters cancel each other out. The 'magic' of b-splines is that the prefilter has infinite support, but the evaluation of the spline has compact support. This means that a small number of coefficients is sufficient to produce the interpolated value, even though - via the infinite support of the prefilter - all samples of the source image contribute. Of course, the contribution of pixels far from the interpolation locus will be very small, and from a certain distance onward it will be negligible, because it falls below the precision of the data types used. So, in effect, b-spline interpolation has infinite support in the source data, even though calculating it only uses a small set of coefficients. 'Direct' interpolation, on the other hand - calculating a result directly from the source data - can't feasibly have infinite support: it would take too long to compute. So it has to window the data in one form or another and work on the windowed data using some 'abbreviation' of an infinite-support interpolation function (like sinc), and the interpolation function has the added constraint that near a given discrete locus, the influence of all other neighbouring pixels must become zero.

Obviously, it's desirable to be able to work directly on the unmodified image data and avoid the prefiltering step, which takes a fair amount of CPU time. On the other hand, it's also desirable to use b-splines of degree two and up, because they are 'smoother'. 'Smooth', in mathematical terms, means that they are continuous in their derivatives as well. A degree-zero b-spline is not at all continuous: its value is constant in a small square area, and likely different in the next one. A degree-one b-spline is continuous, but its first derivative is not. A degree-two b-spline is continuous and has a continuous first derivative. And so it goes on. If we compare the coefficients of a degree-two b-spline with the unmodified data, we can observe that the difference is 'not very large', and small or zero where there is little or no high-frequency content. So we might simply forget about the prefiltering and use the unmodified data instead of the 'proper' coefficients, accepting that the result will lose some high-frequency content. We can expand on this concept by using the coefficients of some spline A with degree X as if they were the coefficients of another spline B with degree Y. If we evaluate B (which is based on coefficients which would, really, be appropriate for degree X), we will get either a smoothed result (if Y is greater than X) or a sharpened one (if Y is smaller than X). Such a result may be desirable, or the difference to the 'true' result may be considered so small that, for a given purpose, it's acceptable. This method is called 'shifting' the spline in vspline (note that this is a term I have coined in vspline, you won't find it in the literature). Rather than 'transporting' the coefficients from one spline to another, the spline object remains the same, but its nominal degree is changed up or down. After the change it 'looks' like a spline with the modified degree, but its coefficients remain as they were.
The second form of shifting leaves the spline unaffected but is used during the creation of 'evaluator' objects. Adding shift to the arguments creating an evaluator results in the same evaluator as shifting the spline and then creating an unshifted evaluator from it.
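The effect of 'shifting' can be reproduced with scipy, whose spline evaluation likewise distinguishes between prefiltered coefficients and raw data - a minimal sketch, with scipy standing in for vspline and the signal invented for illustration:

```python
import numpy as np
from scipy import ndimage

# a 1D signal with some high-frequency content
data = np.array([0., 0., 1., 0., 0., 1., 1., 0.])
coords = [np.arange(len(data), dtype=float)]  # evaluate at the sample loci

# 'proper' degree-2 spline: prefilter the data into coefficients first
proper = ndimage.map_coordinates(data, coords, order=2,
                                 mode='mirror', prefilter=True)

# 'shifted' spline: use the raw data as if they were degree-2 coefficients
shifted = ndimage.map_coordinates(data, coords, order=2,
                                  mode='mirror', prefilter=False)

# the proper spline reproduces the samples exactly; the shifted variant
# acts like a mild low-pass and loses some high-frequency content
err_proper = np.abs(proper - data).max()
err_shifted = np.abs(shifted - data).max()
```

err_proper comes out near zero, while err_shifted is clearly nonzero - the slight loss of high-frequency content described above.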

When an image is first loaded from disk into lux, the raw data are taken from the image file and placed in memory as they are. We can directly use these data as coefficients for degree-zero or degree-one b-splines, which require no prefiltering, and receive a mathematically correct result. For a degree-two or higher b-spline we would need prefiltering to get a correct result, but we can 'get away' with 'shifting the spline up' one degree: the result will only lose a bit of high frequency content, which is not too noticeable. And this is what lux often does right after the image was loaded, to be able to provide 'first light' as quickly as possible. If the user indicates that 'proper' b-spline coefficients are wanted, the prefiltering is done in the background and the viewer switches to the 'proper' coefficients once they're ready. The user may see a slight change in sharpness when this happens, but this is definitely preferable to a blank screen until the 'proper' coefficients are ready.

Once the 'proper' coefficients are ready, the user may decide to shift the spline up or down (press 'S' or 'Shift+S'). The effect depends on the degree of the spline and the shift. A common scenario is that the spline is degree-one. Shifting it up one degree results in the unmodified data being used for a degree-two spline, with slight blur but smooth and without star-shaped artifacts - and with more CPU load. Shifting down one degree results in nearest-neighbour interpolation: fast, but with bad aliasing - and, with high magnification, square-shaped pixels, which may be desirable to 'see' the colour and arrangement of the raw data.

Shifting affects the 'fast' interpolator and the 'quality' interpolator separately: if you press 'S' or 'Shift+S' during an animated sequence, the 'fast' interpolator will be affected, and if you press it while the viewer is at rest, it will affect the 'quality' interpolator. Shifting is a quick way to see if an interpolator with a higher degree will run smoothly: the rendering time depends on the apparent degree of a spline, not on the numerical values of its coefficients. If you shift a spline to degree two and animated sequences run smoothly, you can expect that a 'genuine' degree-two spline will perform just the same. And if you observe that the slight loss of sharpness that occurs when shifting a spline up doesn't bother you, you can avoid the prefiltering step and work with the shifted spline instead.

So, to sum this up in a nutshell, lux may 'cheat' a bit (shift the spline) to produce first light as soon as it has extracted data from an image file, but it will provide 'true' results as soon as it can. And the user can 'cheat' as well (also by shifting) for whatever purpose. fast_interpolator_degree is what lux will work towards, and then it's the starting point for the user to possibly shift up and down from.

Note that values above 7 will silently be set to 7.


This is the first of a whole series of arguments dealing with 'facet maps'. It's used to introduce the individual facets of a facet map. The first thing that needs explaining here is how an argument can introduce several items. 'facet' is a 'list argument'. It can be passed several times, and each time the value will be added to a list of values - it's like issuing a push_back to a std::vector in C++, or a call to 'append' on a list object in some other language. The order is established by the order of the arguments and remains immutable after initialization, so you can't 'change facets round' later on, and you can't remove facets once they've been added.
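The two semantics - 'last-one-wins' versus 'add-to-list' - can be sketched in a few lines of Python (the parsing logic and the set of 'list' options here are my simplification, not lux code):

```python
# hypothetical stream of (option, value) pairs, as if taken from the command line
args = [('hfov', '90'), ('facet', 'a.jpg'), ('hfov', '60'), ('facet', 'b.jpg')]

LIST_OPTIONS = {'facet', 'facet_brightness'}  # illustrative subset

normal = {}   # 'normal' options: last-one-wins
lists = {}    # 'list' options: add-to-list

for opt, val in args:
    if opt in LIST_OPTIONS:
        lists.setdefault(opt, []).append(val)  # like std::vector's push_back
    else:
        normal[opt] = val                      # a later occurrence overwrites
```

After the loop, normal['hfov'] is '60' (the last occurrence won), while lists['facet'] is ['a.jpg', 'b.jpg'], preserving the sequence in which the arguments were passed.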

'Facet maps' are made from a collection of source, or partial, images which are put together in a synoptic view, and the partials go by the name of 'facet'. The syntax is simple - each facet argument names one facet image, e.g. (in lux ini file notation, with placeholder filenames):

facet=first.jpg
facet=second.jpg

But for a 'complete' facet map, you need a few more items, please refer to the 'Facet Maps' section in the main README.

'facet' arguments are usually passed in a separate file, a lux 'ini file', which should have the .lux extension. This file is passed to lux to describe a complete facet map, which is accepted by lux like other 'normal' image files, though it can't be part of a facet map or cubemap itself - the object is not recursive. Why 'usually'? Because this is just a convention: it's usually a good idea to have all information pertaining to a facet map in one place, namely the lux ini file, or lux file for short. But ultimately such a file is merely adding command line arguments, which, in this case, happen to be arguments describing a facet map. If you have several facet maps to display which have arguments in common, you can introduce the shared arguments on the command line (or with a lux ini file read with -c) and only pass the differing ones in the ini file that 'stands for' a given facet map. This goes so far that you can pass an empty file as the lux file which stands for the facet map and pass all arguments on the command line, which may be used as a trick to avoid having to write a lux file for a single facet map when one is made 'on the fly'. Most of the time, though, the lux file will contain all information needed for a specific facet map, and you can think of it as something 'like an image', and also as something 'like a PTO file': PTO files are a good way to 'communicate' facet maps to lux. Lux understands a lot of a PTO script's information (and complains if it can't handle it), and a PTO script is easily made with programs like hugin.


This argument tells lux to apply a multiplicative factor to the intensity values in the given facet after loading the image. The factor can be any positive real value, and is meant to affect intensity values in linear light, even though it will work on non-linear data (which you get when setting --process_linear=no) - your mileage in that case will vary. This does not mean your images have to be in linear RGB - by default lux converts sRGB input to linear RGB on loading the image(s).

This is an optional argument, but if you pass it at all, you have to pass it either precisely once - the value then applies to all facets - or precisely once per facet.

As with all list arguments, it does not matter where you pass it, so it's up to you to, for example, group all arguments pertaining to a specific image, or group all arguments of the same kind - or use some other scheme to your liking. Only the sequence matters, so the first facet_brightness argument pertains to the first facet argument, the second to the second, etc.


So, for example (with placeholder filenames),

facet=a.jpg
facet=b.jpg
facet_brightness=1.5
facet_brightness=0.75

is precisely the same as

facet=a.jpg
facet_brightness=1.5
facet=b.jpg
facet_brightness=0.75

Both assign the brightness factor 1.5 to a.jpg and 0.75 to b.jpg.

The argument can be used to the same effect as the 'Ev' or 'Exposure Value' in a PTO file, but the semantics are different: the Ev value describes how much light was allowed to reach the sensor, by using a specific aperture and exposure time. It's a logarithmic value; with each Ev step, the amount of light reaching the sensor doubles/halves. The facet_brightness argument, on the other hand, is used when viewing or combining the images, and it is set so that images taken with different Ev can be displayed together with 'the same brightness'. So when lux reads a PTO file, it analyzes the Ev values given for the individual facets and calculates facet_brightness values to brighten or darken the facets so that they come out equally bright on the monitor.
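As an illustration of the arithmetic, here's a plausible way to derive facet_brightness factors from per-facet Ev values - note that the choice of reference Ev and the sign convention are my assumptions, not necessarily what lux does:

```python
def brightness_factors(evs, ref_ev=None):
    """Multiplicative factors which equalize facets with differing Ev.

    Assumed convention: a facet shot with higher Ev captured less light
    and gets brightened towards the reference exposure.
    """
    if ref_ev is None:
        ref_ev = sum(evs) / len(evs)   # normalize to the average exposure
    # one Ev step corresponds to a factor of two in linear light
    return [2.0 ** (ev - ref_ev) for ev in evs]
```

With Ev values 11, 12 and 13, this yields the factors 0.5, 1.0 and 2.0.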

For panoramas, it's a good idea to shoot the set of images with constant Ev, but this also has drawbacks: if you take the images in a low-dynamic-range format (like JPEG), you're likely to have some images which have overexposed parts, and some which are too dark. If you allow the camera to auto-expose, these problems lessen, but then you have images with differing Ev, and to display such image sets adequately, you need facet_brightness. Even when setting the camera to use constant Ev, the images may come out with corresponding pixels showing different brightness, due to e.g. lens vignetting.

If your images vary in brightness and you don't have the information handy to set facet_brightness values, lux can calculate adequate values for you - just press 'Shift+L' to make lux do the 'light balance'. The calculated facet_brightness values remain in effect until you open another image.

Keep in mind that differing brightness in the overlap can have different causes. One common source of such differences is light shining into the lens so that it makes it to the sensor by reflection or through optical imperfections of the lens, rather than following the intended 'ray path'. This is often a problem if you have a bright source of light in or near the content (e.g. the sun). There is nothing you can do about that by assigning facet_brightness values - it's simply a flaw in your take, and you can try to mitigate it, but not really undo it. And it's one possible reason why using lux' 'light balancing' may fail to get all images to perfectly equal brightness in the overlap. Another reason is that the camera response is rarely linear, and even if you have source images which are nominally linear RGB or sRGB, they do not faithfully represent the 'scene illuminant' but rather something which 'looks good', by the camera's standards: the faithful depiction of the scene illuminant does in fact usually look quite dull and boring, like an image taken in 'RAW' mode and converted without any 'spicing'.


This is a boolean argument. When set to true, it tells lux that the source image is to be cropped, either to a rectangular cropping area, or to an elliptic one (used e.g. for circular fisheyes). Which type of crop should be applied is encoded with the option facet_crop_elliptic. Facet cropping is done with an alpha channel manipulation and implies --alpha=yes, which is set automatically if any of the facets come with facet cropping. The facet_crop... options are all 'list' arguments, with the usual rule that you need to pass the argument not at all, precisely once, or precisely once per facet. Currently, resulting alpha channel values will be either fully opaque or fully transparent. TODO: consider feathering the alpha channel


If set to a value greater than zero, the facet will be masked out with a feathered mask (it will be 'faded out'), where the fade-out occurs over a zone of about as many pixels as the value passed. The default is a hard mask, which is usually fine because the margins will rarely make it into the final image. But if the margins are visible, the staircase artifacts of a hard mask can be quite ugly, especially for elliptic masks - rectangular masks don't show staircase artifacts, because the edges of the mask are always straight horizontals or verticals. Fade-out is towards the inside of the cropped area, so any pixels outside the limits of the cropping area will be safely masked out.


To apply an elliptic cropping area, set facet_crop_elliptic to 'true'. If it's false (the default), cropping is to a rectangular cropping area. hugin-generated cropping info for circular fisheyes sets the extent so that the resulting ellipse has equal axes: a circle. When input is from a PTO file, facet_crop_elliptic will be set true if the source image's projection is 'circular fisheye' and false otherwise. If input is from a lux ini file, the default is rectangular cropping, but the choice can be made either way for all types of projection - which implies that you must pass facet_crop_elliptic=true explicitly for circular fisheye images, it won't be set automatically. lux extends circular cropping to elliptic cropping, because this doesn't actually cost any extra CPU load, and if there is more flexibility to be had, we're happy to take it. The elliptic cropping is limited to ellipses which are mirror-symmetric about the horizontal and vertical axes.

facet_crop_x0, facet_crop_x1, facet_crop_y0, facet_crop_y1

The limits of the cropping area are given with these four arguments. They encode the maximal extent of the cropping area, which applies to both rectangular and elliptic cropping.
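As a sketch of the geometry, here's how a hard alpha mask for the two cropping modes might be computed - the function and its details are my illustration, not lux internals:

```python
import numpy as np

def crop_alpha(h, w, x0, x1, y0, y1, elliptic=False):
    """Hard alpha mask: 1.0 inside the cropping area, 0.0 outside."""
    yy, xx = np.mgrid[0:h, 0:w]
    if elliptic:
        # the ellipse is inscribed into the rectangle given by the four limits
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        rx, ry = (x1 - x0) / 2.0, (y1 - y0) / 2.0
        inside = ((xx - cx) / rx) ** 2 + ((yy - cy) / ry) ** 2 <= 1.0
    else:
        inside = (xx >= x0) & (xx < x1) & (yy >= y0) & (yy < y1)
    return inside.astype(float)
```

With facet_crop_feather, the hard 0/1 transition would additionally be smoothed over the given number of pixels towards the inside of the area.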


Yet another list argument, pertaining to ranked blending only. It sets a specific 'handicap' for a facet, which is a numeric value affecting its ranking. The handicap is added to whatever 'rank' the ranking algorithm comes up with. Rank zero is the 'best', so adding a handicap makes the facet rank worse. A typical use is to assign a handicap to all facets but one, which results in this facet being rendered on top of everything else and with the facet's complete content, because all other facets now rank worse. This technique is used internally to create partial images to feed to the modified B&A image splining algorithm. To get the effect, pass handicaps greater than one to the facets which are meant to 'go to the background' - the normal ranking range is zero to one.
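A toy calculation shows how handicaps interact with by-distance ranks (the facet names, ranks and handicaps are invented for illustration):

```python
# by-distance ranks in the normal range of zero to one (zero is 'best')
ranks = {'wide': 0.2, 'tele': 0.5, 'fisheye': 0.4}

# push all facets but 'tele' to the background with handicaps > 1
handicaps = {'wide': 1.5, 'tele': 0.0, 'fisheye': 1.5}

# the handicap is simply added to the computed rank
effective = {name: ranks[name] + handicaps[name] for name in ranks}

winner = min(effective, key=effective.get)   # lowest effective rank wins
```

Here 'tele' wins, because the other facets' handicaps exceed the whole zero-to-one ranking range, no matter what their by-distance ranks are.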

Ranking is quite involved, and it is difficult to achieve good results by small modifications of the handicap values.


This option sets the horizontal field of view of a facet. It's another list option: if you pass it once, it's used for all facets; if you pass it more than once, you need to pass it once for every facet. Pass a real value, in degrees. This value is mandatory!


This is the per-facet equivalent of the single-image option 'is_linear'. As for single images, if the image's type indicates that it's either linear (like openEXR) or sRGB (like JPG), this is what lux uses per default, but for some image types (like TIFF) it can be either, which is where you'd use this option to 'clarify'. For images in one of the formats where the format indicates a specific colour space, this option overrides it, so you can even use JPEGs as linear input, which you sometimes need to do if other processing stages have produced such an image by accident. If you globally pass is_linear when displaying a facet map, the value will be taken over for all facets, and the same thing happens when passing facet_is_linear only once. To affect only specific facets, you must pass it for all facets individually.

If your view looks 'pale', this is often an indication that lux takes input as linear when it is in fact sRGB. On the other hand, if the view looks 'dark', the input may be linear, but erroneously interpreted as sRGB.

Currently, linear RGB and sRGB are the only two colour spaces lux knows - if you have images in other colour spaces, you'll have to use external tools to convert the images to either linear RGB or sRGB. The view is always displayed in sRGB, but snapshots can be made in linear RGB or sRGB.
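For reference, here's the standard sRGB-to-linear transfer function - this is the textbook formula; that lux implements precisely this form is my assumption:

```python
def srgb_to_linear(v):
    """Standard sRGB electro-optical transfer function, v in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92                    # linear toe segment
    return ((v + 0.055) / 1.055) ** 2.4     # power-law segment
```

A mid-grey sRGB value of 0.5 maps to roughly 0.214 in linear light - applying this conversion when it isn't due, or skipping it when it is, produces the 'pale' and 'dark' views described above.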

facet_lca, facet_lcb, facet_lcc, facet_lch, facet_lcv, facet_lcs

These options encode the lens distortion of a facet. The values are the same as the lens correction parameters used in panotools. Here's the verbatim copy of the documentation from the README file:


  These options are used for lens correction parameters a, b and c,
  as they are used in panotools. Lens correction and vignetting
  correction are typically set when PTO files are processed and
  a 'live stitch' of real-world photographs is intended, they will
  rarely be useful with synthetic images.


  These options are used for lens correction parameters d and e,
  as they are used in panotools. I've chosen 'h' and 'v' for
  'horizontal' and 'vertical' to set them apart from an internal
  parameter 'd' also used for lens correction.


  This option sets the lens correction polynomial's reference radius.
  The panotools-compatible value is 1.0 and makes the reference
  radius half the shorter edge of the image.


For non-mosaic projections, all facets have to be oriented in space, which is done by the image registration process. Lux does not do the registration, but relies on other software (like hugin) to provide it. When using registration tools from the panotools family, the registration information will be encoded in a PTO file which lux can read directly. But it's also possible to specify facet orientation in lux' own dialect - or to translate a PTO file using the helper script

In PTO, i-lines have three items 'y', 'p' and 'r' which encode yaw, pitch and roll in degrees - the so-called 'Euler angles'. lux uses the same values, but they are encoded by passing 'facet_yaw', 'facet_pitch', and 'facet_roll'. Again these are 'list' arguments, so you need to pass them once for all facets, or once for every facet. These arguments are optional, and default to zero.

The pitch angle is one of the three 'Euler angles' used to describe a facet's orientation in space. It encodes the rotation of the camera around a horizontal axis perpendicular to its optical axis - in other words, by how much the camera is tilted up or down.


facet_priority is used to override 'standard' ranking of facets used with blending=ranked. Per default, 'facet_priority' is set to 'none', meaning 'no override', which results in ranking by distance-to-facet-center only. This ranking is quite complex - the effect of the distance from the facet's center is non-linear, with a 'shallow cone' in the center and a 'steep pyramid' near the edges. This scheme ensures that two facets will have an actual border, even if they differ in hfov, rather than one facet just 'winning' all the way to its edge, so that there would be occlusion rather than a boundary. The boundary is essential for blending - if there is occlusion, we always get a discontinuity, which may or may not be visible. The ranking function - complex as it is - takes processing time, and at times it's preferable to use a different method to prioritize facets and switch the ranking off. This happens when facet_priority is set to a value other than 'none', and there are several modes of operation which can be used instead of the ranking-by-distance.

Using 'explicit' requires specific per-facet values of 'facet_handicap'. The priority of the facets is now by handicap only: the facet with the lowest handicap 'wins' at a given position. Note that the facet's handicap is used as well when the priority is established 'by distance' - then it's added to the rank derived from the distance. Because ranking by-distance produces values in the range of zero to one, setting numerically large handicaps will also effectively override by-distance ranking, but the by-distance ranking is still computed, albeit futilely. When 'explicit' facet_priority is used, this futile calculation is omitted, so the resulting rendering process is faster. This is true for all facet_priority settings apart from 'none'.

Using 'hfov' will produce handicaps correlating with the facet's hfov, so that facets with large hfov will receive large handicaps. This puts facets with small hfov in front of facets with larger hfov, an effect which may be desirable when adding a few tele shots of detail to some wide-angle background - doing it this way will show the entire extent of the small-hfov-facet, whereas the 'standard' way blends it in.

Finally, passing 'order' will add handicaps by facet number, so that facets with low numbers will occur in front of facets with high numbers.

An interesting effect occurs when facet_priority is set to 'hfov' or 'explicit' without any differences in the per-facet values of the priority criterion - meaning all have equal hfov, or handicap, respectively. If you pan over a panorama with such a configuration, you'll notice that the facet whose center is nearest the view's center will be prioritized. This is due to the inner workings of the facet processing algorithm: it may reorder the facets when it builds an interpolator encompassing several facets, to put the 'fittest' facet first, and this facet will 'win' over the other facets if no other criterion interferes. You can use the effect to always see 'all of the most prominent facet'. Just pass, e.g., facet_priority=explicit and no individual facet handicaps. Once your view puts another facet in the 'pole position' it will now win over all other facets contributing to your current view, and you may be able to observe the change in the 'pole position' which looks like the new 'winner' 'taking over' and covering the other facets.


This sets the projection of the facet image - yet again, as a list argument, you can pass it once for all facets, or you have to pass it once for each facet. This argument has no default, it's mandatory. All single-image projections are understood. There is a fundamental difference between 'mosaic' projection and other projections, and they can't be mixed: Either all facets are in 'mosaic' projection, or none is. If the facets are in mosaic projection, the target projection has to be mosaic as well, and vice versa.

When processing PTO files, the facets' projection is gleaned from the 'i-lines', but lux only understands a subset of the projections known in panotools, namely rectilinear, spherical, cylindric, stereographic and fisheye.

Setting up a facet map with facets in mosaic projection is a quick way to 'bundle' a set of partial images for an exposure fusion or for deghosting, if these partial images have already been created by software which was aware of their registration - so the set of single images produced from a PTO file with, e.g., nona, might be used here, by simply listing the facets and adding the (few) mandatory arguments, like so (filenames and the hfov value are placeholders):

facet=img0000.tif
facet=img0001.tif
facet=img0002.tif
facet_projection=mosaic
facet_hfov=50
projection=mosaic

Lux can fuse such image sets, but note that it can't stitch this way: stitching in lux has to be done from a set of registered source images, and the geometrical transformation and image splining both have to be done by lux. Why so? Because lux can't figure out from geometrically transformed images where the seams should be placed: its seam placement is based on the orientation of the participating images in its model space. So you can use a small script which produces a lux ini file like the one above and calls lux with it, as an alternative to enfuse, but you can't emulate enblend this way.


The roll angle is one of the three 'Euler angles' used to describe a facet's orientation in space. It encodes the rotation of the camera around its optical axis. For a description of the use of the Euler angles in lux, please refer to facet_pitch.


facet_squash defines per facet what 'squash' defines for single images. Please refer to the squash section for an explanation. This is an optional list argument: if you omit it, the facet will be processed in full resolution, if you pass it once, the value is taken for all facets, and if you pass it more than once you have to pass it once for every facet. Note that, for convenience, a global 'squash' argument will be applied to a facet map as if a single 'facet_squash' argument had been passed to affect all facets equally.

Individual facet_squash values may make sense if you mix facets with different resolution - for example, fisheye images capturing the scenery in 360x180 degrees, while 'interesting' parts are additionally captured with a longer lens. If you know that the rendition you intend does not require the full resolution of the 'detail' images, you can squash them, but leave the fisheye images unsquashed.

facet_vca, facet_vcb, facet_vcc, facet_vcd, facet_vcx, facet_vcy

These options set vignetting correction values. They are optional list arguments. The values are the same as the ones used by panotools - in an i-line, they are given by the items Va, Vb, Vc, Vd, Vx and Vy. Like the facet_lc... values, you will rarely specify them in a lux file, but instead use a PTO file as input.


The yaw angle is one of the three 'Euler angles' used to describe a facet's orientation in space. It encodes the rotation of the camera around the vertical axis. For a description of the use of the Euler angles in lux, please refer to facet_pitch


This is a boolean option, and defaults to 'no'. Passing 'yes' instead tells lux to create a faux bracket, with the exposure values given by several faux_bracket_ev arguments, or with a combination of faux_bracket_size and faux_bracket_step, which also provides a default if you pass no faux_bracket_ev values. The 'faux' in faux bracketing means that the 'faux' exposure bracket is not a 'true' bracketed shot with separate individual exposures, but created artificially by making brightened and/or darkened versions of one original image. I'll refer to the original image as the 'mother' image. While the 'faux bracket' might be useful in itself, it's not made manifest as a set of individual, differently exposed images outside of the program, instead it's immediately fed into an exposure fusion. It's a short path to achieve what might be done by creating first the differently-exposed partial images and then producing an exposure fusion from them.
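The creation of the brightened/darkened versions from the 'mother' image can be sketched as follows - a simplification in linear light; the function name is mine, and the subsequent exposure fusion is not shown:

```python
import numpy as np

def faux_bracket(mother, evs):
    """Brightened/darkened versions of a linear-light mother image."""
    # one Ev step doubles/halves the intensity; clip to the displayable range
    return [np.clip(mother * 2.0 ** ev, 0.0, 1.0) for ev in evs]
```

For a pixel of linear intensity 0.25 and Ev values -2, 0 and 2, the versions hold 0.0625, 0.25 and 1.0 (the last one clipped); these versions would then be exposure-fused.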

Faux bracketing is a type of tonemapping, and it can be used to compress the dynamic range of an image. Compared to other types of tonemapping, faux bracketing often produces more 'natural-looking' output, similar to the output you'd get if you had taken and exposure-fused an exposure bracket in the first place, but starting with a single image. Starting with a single image has a definite advantage over starting from an image set: you'll avoid all issues arising from differences in the content beyond the different exposure, like parallactic errors or movement in the scene. The best source for faux bracketing is images in an HDR format, like openEXR. Second best are images taken in RAW and then converted to a format which can handle a wider dynamic range than, say, a JPEG - a good candidate would be a 16-bit TIFF, which most RAW converters can produce. Best are data in linear RGB, so if you can set your RAW converter to produce linear RGB output, go for it. Then you have a good starting point for further processing:

First, load the 16-bit linear TIFF image, e.g. like this:

lux --faux_bracket=yes --faux_bracket_ev=-2 --faux_bracket_ev=0 \
    --faux_bracket_ev=2 --is_linear=yes --snapshot_extension=tif \
    --snapshot_tiff_linear=yes IMG_1234.tiff

In lux, once the output looks as intended, press Shift+U.

Now you have output named IMG_1234.tiff.lux.1.fused.tif: a 16-bit TIFF
with the compressed dynamic range.

The code above produces what lux considers a 'standard' faux bracket, using Ev values -2, 0 and 2. There's a 'compound argument' to the same effect, please see 'compress'. Please also refer to the faux_bracket_size and faux_bracket_step arguments for an alternative way of setting up the number and Ev values of the faux bracket exposures - in fact you would get the same effect as in the example above if you omit all the faux_bracket_ev values, because -2, 0, +2 are the default values resulting from the defaults (3 and 2.0) in faux_bracket_size and faux_bracket_step

The drawback with starting from a single image is often its limited dynamic range; even if you're processing RAW images as described above, you'll often have noise in the dark areas, which can become prominent with faux-bracketing, and what's overexposed is simply lost and can't be recovered at all.

faux-bracketing images which have already been HDR-blended and stored in an HDR format like openEXR can make use of the entire dynamic range, compressing it into the 'normal' sRGB range. While lux can do HDR blending, you may prefer to use different software - especially software which can directly HDR-blend RAW images - and only use lux to compress the dynamic range with faux bracketing, which is simple and straightforward and can even be batched.

So the 'mother' image for the faux bracket is typically an HDR image, but it doesn't have to be: faux bracketing can be used for dynamic range compression of an arbitrary source image. It's only that most 'normal' images look okay without dynamic range compression, because all their brightness values 'fit' into the viewer's dynamic range, whereas HDR images typically have content which is too bright and/or too dark to be displayed adequately, and therefore profit more from dynamic range compression, which can make such content visible without looking 'unnatural'. Nevertheless, faux-bracketing 'normal' LDR images is a valid option to, for example, add fill light to dark areas, but the fill light may make noise more prominent, and overexposed pixels simply can't be recovered. It's definitely worth a try if the input has a wide range of brightness values.

Faux bracketing in lux is treated as an additional processing stage, and can be applied to single images, cubemaps, or facet maps. For single image input, the single image itself is the mother image. For synoptic displays, the synoptic process is used to create the mother image, which is then subjected to faux bracketing. So faux-bracketing a synoptic view is equivalent to first saving the synoptic view to an (HDR) image file and then reloading this image as the single mother image of a faux bracket. If the source is a facet map with hdr blending, the mother image will be the hdr-blended facet map (equivalent to first creating HDR output, e.g. an openEXR file, and then reloading). If the input is a facet map with ranked blending, the mother image will be a rendition done with image splining, so the images are first blended with the modified Burt & Adelson algorithm, and then the result is faux-bracketed.

Faux-bracketing facet maps with blending=hdr does not create an exposure fusion first, but hdr-merges the images to create the mother image. The final exposure-fusion step which occurs when displaying exposure brackets on-screen is replaced by faux-bracketing the mother image. For facet maps with ranked blending (a.k.a. panoramas) the intermediate step of blending the images is executed to create the result which most users would expect - omitting it would faux-bracket the 'live stitch', which often has visible seams. For quorate blending, the mother image is simply the deghosted result of processing the input, which is subsequently faux-bracketed. So, to reiterate, faux bracketing will first create a single mother image and then faux-bracket that.

The processing for faux bracketing is - obviously - computationally expensive: several versions of the mother image have to be created and then exposure-fused. This is too much processing for animated views, so faux brackets are only displayed on-screen when the viewer is at rest, and they can take a good while to show. During animated sequences, and while the faux-bracketed result is not ready, 'less involved' images are displayed, which keep you oriented but don't show the complete effect yet. The switch from showing such an image to showing the final result can be annoying, because the result of faux bracketing tends to look quite different from the mother image, but that can't be helped. Using faux bracketing for the view displayed on-screen is more to help you get the parametrization right before you produce a 'final' faux bracket from your input as a single output image. Think of it as a preview, which is made quite quickly, but can't be rendered at 'animation' frame rates. Once you're happy with the result, you can store it to an image file.

If you want to temporarily suppress the automatic display of the faux bracket view when the viewer is at rest, pass '--faux_bracket=no' in the override line - the Ev values will be remembered and used again if you pass '--faux_bracket=yes' subsequently.

The production of a faux bracket in lux is technically an exposure fusion - because of the last step, which is exposure-fusing the several versions of the mother image. Therefore, if you want to capture the result of faux bracketing to an image file, you need to specify that your output should be exposure-fused. When triggering output with the keyboard, use 'U' or 'Shift+U', as you would for other exposure fusions. When triggering the output with a command line option, you can use --next_after_fusion=yes, and the output will be stored with 'fused' infix. For a 'compact' option with three default Ev values, see compress


This is a list option used together with faux_bracket, and the values passed with faux_bracket_ev are taken as exposure values for the brightened/darkened versions of the faux bracket's 'mother' image.

To make sense, you need to pass this argument at least twice - to generate two different versions of the mother image for subsequent exposure fusion. The values you pass are in Ev units, an Ev step of 1.0 is equivalent to a factor of two in linear light. Using Ev values instead of a multiplicative factor was chosen here because this measure is common for exposure brackets in photography, where you can usually tell your camera to take an auto exposure bracket (AEB) with a given set of Ev values (or Ev differences from the 'middle' exposure).
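The relation between Ev units and linear light can be captured in a one-line conversion - a minimal sketch (the function name is mine, this is not lux code):

```python
# Ev offsets translate to linear-light multipliers: one Ev step is a
# factor of two, as described above.

def ev_to_factor(ev):
    """Return the linear-light multiplier for a given Ev offset."""
    return 2.0 ** ev

print(ev_to_factor(1.0))   # 2.0 - one Ev step up doubles linear light
print(ev_to_factor(-1.0))  # 0.5 - one Ev step down halves it
```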

A typical set of values would be -2, 0, and 2, which is also the set of Ev values used for the 'compound' argument 'compress', but you're free to choose any number of values and you're not limited to a specific pattern like the +/-X Ev which is often your only choice when making AEBs with a digital camera. What makes sense photographically is a different matter and depends on the input. When passing positive Ev values, expect the result to come out brighter. So passing, e.g., Ev 0 and Ev 2 will just add a fill light and produce an image which looks brighter, overall. Passing, e.g., Ev 0 and Ev -2 will only pull down bright parts and produce an image which looks darker, overall.


Sets the number of artificial exposures used for a faux bracket. This option is only processed if no faux_bracket_ev values have been passed. See the next option for documentation of the effect.


If you set faux_bracket but don't pass a set of Ev values with several faux_bracket_ev arguments, this option and the one above will be used to populate the vector with Ev values. The defaults (3 and 2.0) are set so that you get a vector with the three Ev values -2, 0 and +2 Ev, but you can pass other values. The vector will be set up so that the sum of all elements comes out as zero. Most of the time, such a symmetrical setup with equal Ev values is what you want, so passing explicit faux_bracket_ev values will rarely be needed.
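How such a zero-sum vector can be derived from a count and a step size might look like this - an illustrative sketch, not the actual lux code:

```python
# Build `count` Ev values, `step` apart, centred on zero so that the
# sum of all elements comes out as zero.

def make_ev_vector(count, step):
    centre = (count - 1) / 2.0
    return [(i - centre) * step for i in range(count)]

print(make_ev_vector(3, 2.0))  # [-2.0, 0.0, 2.0] - the default setup
```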


This option is used for processing facet maps with 'ranked' blending - panoramas, for short. Its effect is on the 'live' view of a facet map - the view which lux displays during animated sequences and when the viewer is at rest, but before the 'stitched' view is ready (provided that snap_to_stitch is active). Pass a numerical value, the order of magnitude is pixels. For typical panoramas, a small two-digit number is often a good choice.

Without feathering, 'ranked' blending produces a 'hard' border between the facets, which may or may not be visible - if the images fit well spatially and photometrically, you may not be able to see the facet border at all. If the border is visible, feathering will 'cross-fade' between the facets, making it less visible, at the expense of potentially showing double images of high-frequency content in the transition zone; differing brightness, in turn, will result in a gradient in the transition zone, which is noticeable if the zone is quite narrow. Both effects are annoying, and if you widen the transition zone to make the gradient less prominent, you get more high-frequency double images, making it hard to find a good compromise. These effects are precisely what triggered the development of 'image splining', which avoids both of them quite effectively. You can think of feathering as a 'quick fix' which may be sufficient for some input - and indeed it's so fast to compute that it's used for lux' live view, whereas a 'proper' stitch with the Burt & Adelson algorithm takes much longer. I recommend you refer to the original article "A Multiresolution Spline With Application to Image Mosaics" by Peter J. Burt and Edward H. Adelson, which is available online, for example here:

The article explains the shortcomings of 'classic' blending methods and shows how a multiresolution approach can avoid them.
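To illustrate what the cross-fade does, here's a minimal one-dimensional sketch of feathering with a linear ramp - the function name and the precise placement of the ramp are my choice, not taken from the lux code:

```python
# Weight of the 'left' facet at position x, for a seam at `seam` with a
# feathering width of `width` pixels. width == 0 gives the 'hard' border.

def feather_weight(x, seam, width):
    if width == 0:
        return 1.0 if x < seam else 0.0
    # linear ramp from 1 to 0 across [seam - width/2, seam + width/2]
    t = (x - (seam - width / 2)) / width
    return min(1.0, max(0.0, 1.0 - t))

print(feather_weight(9, 10, 0))   # 1.0 - hard border: full weight left of the seam
print(feather_weight(10, 10, 4))  # 0.5 - feathered: half-and-half at the seam
```

The blended result at x would be weight * left + (1 - weight) * right - with a hard border the switch is abrupt, with feathering it's gradual over the transition zone.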

It may be surprising, but in lux, feathering is also relevant for image splining. The original Burt & Adelson algorithm does not use 'hard' masks, but employs a transition zone - one example uses a one-pixel transition zone where two masks overlap. Lux extends this method by applying the feathering to the masks, so the feathering argument does have an influence on the stitch. Normally, you don't use feathering for stitching - you only use it if you get visible 'hard' discontinuities at facet borders due to imperfectly fitting images. Try small single-digit values. When using feathering with image splining, you'll get doubled high-frequency content in the transition zone if the images don't fit well spatially, but this may be preferable to a visible boundary. Both flaws reflect suboptimal photographic technique, but at times you have to make do with what you have.

Note that the masks which lux generates are strictly 'by geometry' and not influenced by image content - there is no 'seam optimization'. If your input images don't fit well spatially, or registration is imperfect, you may be better off using other software for blending (e.g. hugin+enblend) which does seam optimization, and therefore creates masks which avoid putting seams where they would produce prominent discontinuities. Seam optimization can mask poor fits to an extent, but it can't hide them in every case.


This is a 'compound' argument. You need a facet map as input, containing a set of images taken with different focus. The images will be blended into a 'focus stack', picking the 'sharpest' bit from each of the images. The classic approach for this technique is to calculate the weights by applying a small convolution kernel measuring local contrast. Lux uses the squared gradient magnitude of a b-spline over the images, which has the advantage of being continuously defined. For more about focus stacking, see focus stacks.

Passing --focus_stack=yes sets up all arguments as needed and immediately produces a source-like snapshot. Technically, the result is an exposure fusion with exposure_weight set to zero and contrast_weight set to one. The output will have a 'fused' infix. After the output is saved to disk, lux will proceed to the next image or terminate if there are no more images.
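As a toy illustration of contrast-driven weighting, here is a one-dimensional sketch using a finite-difference squared gradient - keep in mind that lux actually evaluates the gradient of a b-spline over the images, so this only approximates the idea, and all names here are mine:

```python
def squared_gradient(signal):
    """Central-difference squared gradient, zero at the borders."""
    g = [0.0] * len(signal)
    for i in range(1, len(signal) - 1):
        d = (signal[i + 1] - signal[i - 1]) / 2.0
        g[i] = d * d
    return g

def focus_stack_1d(images):
    """Per position, pick the sample from the image with highest contrast."""
    weights = [squared_gradient(img) for img in images]
    return [images[max(range(len(images)), key=lambda k: weights[k][i])][i]
            for i in range(len(images[0]))]

sharp = [0, 0, 10, 0, 0]   # strong local contrast in the middle
blurry = [2, 3, 4, 3, 2]   # mild gradients everywhere
print(focus_stack_1d([sharp, blurry]))  # [0, 0, 10, 0, 0]
```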


This is an integer option and determines the maximal frame rate for animated sequences. Passing this argument will only have an effect when 'use_vsync' is off, and tells SFML (the library lux uses for driving the display and UI) to use a fixed frame rate limit instead of the usual frame rate control by synchronizing with vsync - a term from the old days of CRT monitors, which, in modern times, means to run at the frame rate the screen is set to accept - typically 50 or 60 fps.

You can pass any value, down to one, which shows one image per second. In my experience, this method of driving an animation is inferior to using vsync, but this may be due to my specific setup. It's an option if using vsync would be 'too fast' for your content. Lux has options to get rendering times down, but lowering the frame rate is also a way of dealing with limited resources. Of course the limit you set here is an upper limit, and if rendering is too slow, you can't reach the limit and, again, you'll face dropped frames and stutter.

If you pass --use_vsync=no and no frame rate limit, lux will create frames as fast as it can and immediately feed them to the GPU. Usually this is not a good idea - frame creation times vary and 'forcing' the frames on the GPU will not result in a smooth animation - you'll get stutter, and other problems like tearing, but you may want to test just how many fps lux can produce running as fast as it can. If you also want to omit passing the rendered frames to the GPU, use -n or --suppress_display=yes on top, which discards the frames after rendering them. Obviously, this option is only good for benchmarking.

I haven't yet had an opportunity to test lux on a system with a GPU/monitor combination which can accept varying frame rates (I'm referring here to techniques called 'FreeSync' by AMD and 'G-Sync' by Nvidia). I assume that for such systems, using lux with a fixed frame rate may be the correct choice: animations in lux rely on a fixed frame rate, rather than rendering frames to coincide with a real-time clock. Simply having lux calculate frames as fast as it can and sending them off to be displayed will not result in a smooth animation, because rendering times may vary from frame to frame, which would make the animation seem to run at varying speed.


This is an integer option. It sets the maximally allowed number of frames in the frame queue. The rendering thread pushes frames to this queue as soon as it has rendered them, and it will render one frame after the other unless the limit is already reached, which makes it block until the main thread has consumed frames. By raising the limit, the rendering thread can 'work ahead', but with more frames in the queue it will take longer until any user interaction makes it 'all the way through', so latency increases. With a 'long queue' frame rendering times just under the theoretical maximum become possible, but at the cost of high latency. The default value (typically three or four) is heuristic.
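The handshake between the rendering thread and the main thread maps directly onto a bounded queue. A minimal sketch using Python's standard library (not lux code, which is C++):

```python
import queue
import threading

frame_queue_limit = 3                        # corresponds to the option's value
frames = queue.Queue(maxsize=frame_queue_limit)
shown = []

def rendering_thread():
    for n in range(10):
        frames.put(f"frame {n}")             # blocks while the queue is full
    frames.put(None)                         # sentinel: no more frames

def main_thread():
    while (frame := frames.get()) is not None:   # blocks while the queue is empty
        shown.append(frame)

t = threading.Thread(target=rendering_thread)
t.start()
main_thread()
t.join()
print(len(shown))  # 10 - all frames made it through the bounded queue
```

Raising maxsize lets the producer run further ahead, which smooths over variations in rendering time but increases the number of already-rendered frames a user interaction has to 'overtake' - precisely the latency trade-off described above.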

If you have problems with dropped frames and can live with latency, increasing the frame queue limit is a good option - unless your average frame rendering time is too long to produce the 50 or 60 (or more - it depends on your system) fps for smooth animation. If that happens, add --auto_quality=yes to your parameter set.


This is a boolean option, and true by default (except for Mac builds, see below). It tells lux to start in full-screen mode, rather than in a window. This tends to be the best way to run lux - in my experience, running lux in a window produces more problems with stutter and dropped frames, and most of the time, when you want to look at images, you want to use your whole screen anyway. I certainly do, and one thing which annoys me about most image viewers is that I have to click my way to a full-screen display at the beginning of the session instead of getting it straight away.

If lux was started in full-screen mode, you can always switch to window mode (use F11 or the button labeled 'WINDOW' in the GUI) and back, and there are just these two modes - not an in-between mode where the window is extended to the max but still has a frame, another thing I find annoying; after all, I can extend a window to maximal size without 'help' from inside the program.

On the Macs where I tested it, lux crashed immediately when started in full-screen mode. So the cmake build script now checks the 'APPLE' variable, and if it's set it builds with fullscreen=false as default. Switching to fullscreen mode later on with 'F11' seems to fail as well, probably because the key is not recognized correctly by SFML - but using the window's control to switch to full-screen mode seems to work okay.

TODO: I noticed that when switching to full-screen mode and immediately trying to access the GUI, the mouse pointer 'hits a wall' at the GUI's lower border and the GUI fails to show. If I wait a little while until the mouse pointer goes off and try again, all is well.


This is an 'exotic' option, pertaining to facet maps, and defaulting to 'no'. Passing 'yes' here indicates that the facet map in question has content to cover the full 360x180 degrees without any gaps. This allows lux to do certain calculations more efficiently, but the difference is not very large. If you pass 'yes' but there are gaps, lux may crash.


This is a 'compound' option, processing a facet map into an exposure fusion and saving the result to disk, then proceeding to the next image. Most of the time, you'll use a PTO file to define the facet map, and then the output will match the specifications given in the PTO's p-line. If you pass the facet map with a lux ini file, a source-like snapshot will be created, matching the specs of the first facet - or the one designated with snapshot_facet.


This option is on by default and results in the edge of the image being faded out to black (or to transparent, for views with alpha processing). The fade-out is just a couple of (source) pixels wide, and the effect is used to avoid staircase artifacts along the image borders, which can be quite annoying otherwise. The option only affects single-image displays - and synoptic views if alpha processing is activated. If your source images are very small, the size of the fade-out margin can appear disproportionally large, in which case you may be better off passing --grey_edge=no.

Without alpha processing, this option can't be used for synoptic views, because fading facets to black where they occlude other facets would spoil the view; the effect has to be rendered using transparency. Even if you pass --grey_edge=yes on the command line, you won't get an effect unless alpha processing is on.


With this option, you can set the font used by the GUI - this is the font used for button labels and for the short info texts you get when secondary-click-holding the buttons, and it's also used for the splash screen and the status line. The default is to use the font that comes with lux: Sansation_Regular.ttf. This is a free font created by Bernd Montag, available from

I include this font in lux bundles and it's also in the git repo - if you redistribute it, please make sure you follow its license and also include the README file, currently Sansation_1.31_ReadMe.txt, also in the git repo and in lux bundles.

When using other fonts, your mileage will vary - I haven't made much of an effort to make the use of other fonts work smoothly, and some fonts don't offer all the glyphs used by lux. So when using different fonts, you may see labels which don't fit the buttons well, and missing characters. Apart from such issues, everything SFML can load as a font should work.

Lux does not rely on any fonts being installed on the system, because it aims at being platform-independent. Relying on a system font would require installing that font during installation of lux, which may or may not be done by the installing process. Lux is ignorant of such processes, but system-specific builds may introduce a notion of where to look for the default font - see CMakeLists.txt for more information on that.

Another way to set the font is via an environment variable: set LUX_GUI_FONT.

Lux will refuse to run if it can't find the default font and you haven't specified a different one with this option.


This option sets the extent of the GUI (in pixel units) as it appears initially. The value refers to the whole GUI stripe with all buttons. In a running lux session, you can change this value by performing a Ctrl+mouse wheel scroll on the GUI area. If this value exceeds the viewing window's size, the right part is cut off, but you can use a mouse wheel scroll (now without Ctrl) to move the section which is displayed.

The default here is zero, which means 'unset'; lux then picks a size which should match one full screen's width. The actual determination of the size is slightly more complex and derives the value from the desktop height, to avoid a 'giant' GUI stripe when the user's desktop extends over several screens - here lux silently assumes that the screens are side by side and not on top of each other, which should be a rare exception.


This option sets a scaling factor for the GUI which determines the size of the texture the GUI is rendered to in relation to the desktop. (Note - the meaning is shifting with the new GUI, this is work in progress). Using values less than one will use a smaller texture - the labels will become harder to read and blurred - and using values more than one may make the labels 'crisper', but they also bloat the status line (TODO: change that). The default is zero, which tells lux to pick a 'good' value automatically. The new GUI sets this value once at program startup, and this argument can be used to provide a larger GUI, e.g. for tablets, where the normal size is too fiddly, or for laptops with very small screens. Note that passing 1.0 here is way too small for macs with retina displays. For normal desktop use, the automatics should be just fine.


When SFML creates the window's openGL context, it can create it with an 'SRGB-capable frame buffer'. Such a frame buffer takes texture data in linear RGB ('lRGB') and performs the lRGB to sRGB conversion on the GPU. This conversion is expensive, so the option looks tempting. But there is one large drawback: SFML can - AFAICT - only handle uchar data to create a texture, and these data have insufficient resolution for dark lRGB pixels, which results in banding. Whether this banding is noticeable or not depends on the content, but especially larger dark areas with mild gradients show the banding to an extent which is annoying. Hence this option is off by default. You can try and switch it on if you have frame rate issues - just check and see how your content comes out. If you're simply watching 'ordinary' images taken in broad daylight, chances are you'll not even notice the banding, and it may well shave a millisecond or so off the frame rendering times.
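The banding can be made plausible with a quick back-of-the-envelope check: push the darkest 8-bit sRGB levels through the standard sRGB transfer curve and see how few 8-bit linear codes they collapse onto. This is just an illustration using the well-known sRGB formula, not lux code:

```python
def srgb_to_linear(s):
    """sRGB [0,1] -> linear RGB [0,1], standard sRGB transfer curve."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# map the 32 darkest 8-bit sRGB levels to 8-bit linear codes:
codes = {round(srgb_to_linear(v / 255.0) * 255.0) for v in range(32)}
print(sorted(codes))  # only a handful of distinct codes survive
```

32 distinct dark display levels end up sharing only a few 8-bit lRGB codes - when the GPU converts back for display, neighbouring dark intensities come out identical, which shows up as banding in mild gradients.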

TODO: I am tempted to investigate and create textures from e.g. ushort data directly with openGL and then either find a way to make SFML 'adopt' them or convert them to uchar textures. While this would solve the banding issue, another one arises: even though lux' pixel pipeline produces SIMD vectors of single precision float data, these data are never committed to memory: the last stage of the pipeline optionally does the lRGB to sRGB conversion, converts the floats to uchar and interleaves them. This is what's saved to memory and passed on to create the textures, and it's a compact representation - storing to ushort would cost twice the memory (and the traffic associated with passing it to openGL), and storing to float would require four times as much memory (traffic). Plus, when storing to float, I'd have to scale down because lux uses values in [0.0-255.0] whereas openGL uses [0.0-1.0]. All of these factors may use up so much extra time that the gains which result from having the lRGB to sRGB conversion done on the GPU may be outweighed - which will in turn depend on the system lux is running on.


This is a compound argument or 'action', telling lux to create an HDR-merged output from a facet map. This option sets the output format to openEXR. The image will be as specified in the p-line. If you use a lux ini file as input, the output will be like the first facet - or like the facet you designate with --snapshot_facet=...

This compound option produces very similar results to hdr_merge (see below) but uses a different process: here, the rendering is in fact an exposure fusion, but with hdr_spread set to 1.0. So the images are combined with pyramid blending, but there is no dynamic range compression, and the resulting image is a 'proper' HDR image, capturing the entire dynamic range of the set of input images in the output. I find it hard to express a preference for either hdr_merge or hdr_fuse - it seems to depend on the content which works better. But it's definitely nice to have a second way of HDR-merging image stacks readily at hand with a compound option. Of course the same effect can be had by using --fuse=yes and --hdr_spread=1.


Note: lux 1.0.9 is buggy with this option (it's fixed in master). As a workaround, add --snapshot_extension=exr to the command line.

This is a compound argument or 'action', telling lux to create an HDR-merged output from a facet map. This option sets the output format to openEXR. The image will be as specified in the p-line. If you use a lux ini file as input, the output will be like the first facet - or like the facet you designate with --snapshot_facet=...

HDR-merging in lux uses a reasonably complex merging algorithm, which weighs well-exposedness with a Gaussian curve. It suppresses over-exposed pixels automatically. But it's not configurable - switch to more sophisticated HDR merging software if lux' automatics don't work for you.

Using this compound argument, you can process many brackets in one go. A good way to do that is to move brackets to separate folders, then loop through the folders and produce folder-local PTO files for the registration - here, I'd recommend using align_image_stack, which does a very good job at aligning the images for HDR merging and can produce PTO output. After that, you can simply do

lux --hdr_merge=yes */*.pto

to get openEXR output. If you want exposure-fused output instead, it's

lux --fuse=yes */*.pto


This option defines the dynamic range of exposure fusions. Pass a value between zero (the default) and one, the maximum. Zero produces a 'normal' exposure fusion with the 'standard' dynamic range of 0-255 for intensity values, and this was previously the only setting. Now lux has code to produce HDR-merged images with the help of multilevel blending (aka pyramid blending or image splining), and it turns out that the difference between the two can be captured in a single factor, which, for HDR output, is the other extreme of hdr_spread, namely 1.0. But it's also possible to opt for a compromise, namely to allow a dynamic range exceeding the 'normal' one but not covering the whole range offered by the participating facets.

To see the effect, run any job producing an exposure fusion using exposure_weight (jobs with contrast_weight only won't be affected) and pass an hdr_spread value between zero and one. Passing zero will make the result look like a normal exposure fusion, passing one will produce HDR-merged output, and other values will produce 'something in between' - an HDR image, but with its range more or less compressed. One use case would be creation of content for displays with HDR capability, where hdr_spread might be chosen so that the output range meets the intended display's.

Also refer to hdr_fuse, which is a new compound option to create exposure fusions with hdr_spread=1, output as openEXR, in one go.


This lists all arguments lux knows, with their type and default, like so:

$ lux --help
detected path: <>
initializing user interface
processing command line arguments
argument: help value: ''

all options can be passed in 'long form' with two
'-' characters before the option. Some options also
have a short version, please refer to the documentation.

long options can be passed 'ini file style' with the
option and it's value separated by '=' or ':' and no
white space, or separated by white space.
so pv --hfov=90 is the same as pv --hfov 90
white space after the '=' or ':' is interpreted as if
an empty string had been passed as the option's value.
Some options can be passed several times, they are marked
with 'adds value to a list'. this is mainly used for
facet maps, to pass one value per facet, but also for
multiple 'image' arguments.

valid long options are (default given in round brackets):

--allow_pan_mode=<yes|no>  (yes)
--alpha={no|as-file|auto|yes}  (auto)
--auto_position=<yes|no>  (yes)
--auto_quality=<yes|no>  (no)
--autopan=<real number>  (0)
--blending={auto|ranked|hdr|quorate}  (auto)
--bls_i_spline_degree=<whole number>  (1)


After emitting the list, lux terminates.


This option takes an angle in degrees. It sets the horizontal field of view for the source image. 'Ordinary' image files usually don't have metadata providing this value, but 'ordinary' image files aren't expected to be used in a panorama viewer context either, so they don't have projection metadata. If no projection is specified, you may omit the hfov argument, and the image will be treated as a 'flat' or 'mosaic' image, rendered in planar geometry only. All other single-image projections require hfov to be set, because lux can't guess it.

Some images have suitable metadata: all images which lux generates have them (lux-generated openEXR files have a sidecar lux ini file with the metadata, because openEXR itself can't hold metadata), and images made with hugin also have usable metadata - full sphericals have the complete set as GPano metadata, other hugin-generated images have rudimentary information in the UserComment Exif tag, which works to an extent but lacks sufficient cropping information. If suitable metadata are present, you can omit the projection and hfov arguments, and lux will use what it finds in the metadata. Keep an eye on lux' command line output to see what it gleans - lux is quite verbose and will tell you what metadata it finds. You can also see lux metadata with exiftool; here's an example of output made with lux, where the lux metadata start with 'Lux version':

exiftool lux_made.jpg
ExifTool Version Number         : 11.88
File Name                       : lux_made.jpg
Directory                       : .
File Size                       : 738 kB
File Modification Date/Time     : 2021:06:27 11:12:06+02:00
File Access Date/Time           : 2021:06:27 11:12:06+02:00
File Inode Change Date/Time     : 2021:06:27 11:12:06+02:00
File Permissions                : rw-rw-r--
File Type                       : JPEG
File Type Extension             : jpg
MIME Type                       : image/jpeg
JFIF Version                    : 1.01
Resolution Unit                 : None
X Resolution                    : 1
Y Resolution                    : 1
Exif Byte Order                 : Little-endian (Intel, II)
User Comment                    : .Projection: Rectilinear (0).FOV: 70.000000 x 42.995661
XMP Toolkit                     : XMP Core 4.4.0-Exiv2
Lux version                     : 1.0.9
Cropping active                 : False
Uncropped hfov                  : 70
Uncropped vfov                  : 42.9957
Projection                      : RECTILINEAR
Uncropped width                 : 1920
Uncropped height                : 1080
Image Width                     : 1920
Image Height                    : 1080
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:4:4 (1 1)
Image Size                      : 1920x1080
Megapixels                      : 2.1

If you load a file like this, lux shows you a perspective-corrected view rather than simply a flat image: it 'understands' that the image is a rectilinear projection with a given field of view, and so it can produce a corrected view depending on where you direct lux' virtual camera. This can be surprising if you expect to look at a 'flat image' and use, e.g., the cursor keys to scroll or pan: the image won't simply scroll or pan, but instead you get movement of a virtual camera. If you want to 'force' an image to be treated as flat, use 'mosaic' projection - the fastest way is to simply enter -pm in the 'override line' of the GUI and commit with 'Enter' - or to invoke lux with -pm in the first place.


While hfov tells lux the horizontal field of view of the source image, hfov_view tells it the field of view of the viewing window, or, to express it differently, the horizontal field of view of lux' virtual camera. This is the value you can modify, later on in the session, by zooming in or out. In contrast to hfov, hfov_view can be set automatically - it's not intrinsic to the image, so any angle will do, and the default choice is not fixed but may vary with the input's properties. If you pass hfov_view, though, lux will start with the value you pass, and not use its automatics to find a 'good' starting point.

When lux produces image files - like snapshots, exposure fusions, or panoramas - the content of the images is just what you see on-screen (unless you're doing 'source-like snapshots'). So the normal mode of producing such output is to get the view just right, changing target projection, image size, zoom factor etc., then doing the snapshot. At times, though, you may want to produce an image with precisely defined metrics. You can do that by starting lux in window mode and passing the window extent and the view's hfov. If you do a snapshot from such a view, it will reflect these settings. Keep in mind, though, that the window size mustn't be too large, because your window manager may not allow creation of windows above a certain size. And also remember snapshot_magnification, which makes lux produce an image with higher or lower resolution than the on-screen display. Here's an example:

lux --fullscreen=no --window_width=1600 --window_height=900 \
    --hfov_view=50 --snapshot_magnification=2 image.jpg

This will create a 3200x1800 snapshot of a 50 degree section of image.jpg.


This argument is used to position cropped images. The typical way one would expect to handle cropping is to pass the size of a cropping rectangle and the size of the 'whole' image, and lux does that when cropped images are supplied with lux metadata. To handle cropped images with command line parameters, lux instead uses angles, which is more compact: to describe cropping with size parameters, you need six - width and height of the uncropped image and of the cropped area, plus x and y offset of the cropped area. The angles lux accepts with horizontal_offset and vertical_offset reduce the parametrization to two values. These angles are the angle from the 'back pole' (the spot 'behind you') to the left image margin and the top image margin, respectively - the latter is fixed with vertical_offset.

The default is to assume that images are both horizontally and vertically centered, which is expressed by passing -1.0 to the ..._offset arguments or by omitting them. Then, calculating the offset angles is easy: it's simply half of what's left after subtracting the field of view from 360.
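The arithmetic for the centered case is indeed simple - a sketch (the helper name is mine, not a lux identifier):

```python
def centred_offset(fov):
    """Offset angle (degrees) from the 'back pole' to the image margin
    for a centred image with the given field of view."""
    return (360.0 - fov) / 2.0

print(centred_offset(360.0))  # 0.0   - a full spherical starts right at the back pole
print(centred_offset(90.0))   # 135.0 - a 90-degree image's margin is 135 degrees from it
```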

Note that you'll have to pass projection and hfov as well if you want to use this option: lux won't use metadata values in this case, to avoid confusion arising from mixing two different sources of information. If you omit the projection and hfov, the image will be treated as 'mosaic' and the view will be wrong - or entirely black. TODO: maybe reject this constellation


In a 'normal' lux invocation, you pass image files - or files 'standing for' images (like lux ini files for facet maps and cubemaps, PTO files) - as trailing arguments on the command line. But you can also pass them anywhere in the command line (not just at the end) if you pass them as --image=XXX arguments. And, like all command line arguments, you can use this syntax in lux ini files, just omit the two minus signs. Whichever way you use, the images are queued and you can move to the next one with 'Tab' and back with 'Shift+Tab'.

So if you have a.jpg, b.lux and c.pto, you have several routes to the same effect:

lux a.jpg b.lux c.pto

lux --image=a.jpg --image=b.lux --image=c.pto

Or, you write a lux ini file like this (save it as collection.lux):

image=a.jpg
image=b.lux
image=c.pto

Which you can invoke like this:

lux -c collection.lux

The '-c' is important: if you omit it, lux takes 'collection.lux' as a file 'standing for' an image, and only shows the first image it finds in this file. The -c makes the difference: it makes lux read all assignments in the ini file as if they had occurred on the command line, with the result that all the image=... statements result in the images being queued, as desired.

I usually set up slideshows with such a 'collection' file, because it's a good way to add a few things like switching the status line off for the presentation and setting the slide show interval:


While we're at it, there is a third way of enqueueing images with lux: 'stream mode'. If you end your lux invocation with a single '-', lux listens on its standard input for the filenames of images to enqueue. So the above example could also be invoked like this:

echo -e 'a.jpg\nb.lux\nc.pto' | lux -

And to show all JPEGs in the current folder, you might use

ls *.jpg | lux -

This may seem redundant, but there are situations when the content you want to show isn't available at the time you invoke lux - you might have a script produce it in the background, or load it from a remote site. If your content-setup-process echoes each image file name as it becomes available and pipes this output to lux, the files will be available to the lux session as their names are passed through the pipe. And you can easily do complex stuff without intermediate files:

find . -name '*.jpg' -print | lux -d2 -

will recursively search the directory tree from the present working directory, enqueue all JPEG files and display them in a slide show with two seconds per image. This is a nice trick to scan, e.g., music or e-book folders for cover images. The only drawback is that if you try tabbing to the next image and the pipe hasn't provided it yet, you are stuck with the last image lux received from the pipe until the next image arrives. Keep in mind that lux terminates once the pipe is closed - so if you appear to be stuck, the pipe is still open, but no new filename has arrived yet.


This option passes a lux ini file to the invocation. There is also a short form of this argument, -c.

Lux ini files can occur in two syntactic slots: they can stand 'for an image', in which case they are enqueued and processed like other image files. Such ini files are passed as trailing arguments or with the --image=... option.

Lux ini files passed with --ini_file or -c are not enqueued, but processed immediately, right when the option is encountered during parsing of the command line. It's as if their content had occurred on the command line (with double minus signs prepended to their key=value pairs). The 'c' is a hint as to what these lux ini files do; it's short for 'configuration'. Such ini files are merely a convenient way to put a common handle on a set of options. Typical examples are parameter sets used for slideshows, or parameter sets producing a specific status line used to analyze and compare image sets.

You can pass this option several times, the effect is cumulative, and if an option occurs several times, the last assignment 'wins'. Lux ini files passed with --ini_file=... can contain ini_file=... statements, which are treated in the same way, so the option is recursive.


This option takes a real value in degrees, and sets the initial pitch angle of lux' virtual camera. By default, lux sets this value up depending on the input, attempting to find a 'good' value for it (see auto_position). But sometimes you want a specific starting point, and with initial_pitch you can set it. Passing zero will land you in the vertical center of the image.

Passing a positive angle will point the camera further up, and a negative value will point it further down. Note that the value is accepted unconditionally, and if there is no content where you point the camera, you'll see black.

If you've started viewing an image with initial_pitch set, pressing return will reset the view to this initial pitch angle.

initial_roll, initial_yaw

These options work just like initial_pitch above. Positive initial_yaw moves the camera to the right, negative initial_yaw moves it to the left. Positive initial_roll turns the virtual camera clockwise, negative initial_roll turns it counterclockwise. Note that this may feel counterintuitive: rolling the virtual camera clockwise results in an image which seems rotated counterclockwise.
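For example, to start viewing a panorama looking 30 degrees up and 90 degrees to the right of the default position:

```
lux --initial_pitch=30 --initial_yaw=90 pano.jpg
```

Pressing return during the session resets the view to these initial angles.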

initial_dr, initial_dx, initial_dy

When displaying 'flat' images with 'mosaic' projection, the notion of pitch and yaw does not work: the camera is always oriented directly towards the image. But it can be moved up and down, and left and right - and it can be turned - 'rolled' - around the optical axis. Per default, lux will 'land' you in the center of a flat image, but you can change the starting position by passing one or several of these options. initial_dr sets the roll of the camera, just like initial_roll does for non-mosaic projection. initial_dx and initial_dy use an 'artificial' value: 1.0 is taken to mean 'the image's extent in that direction'. The value used internally is much smaller, you can see it as the 'extent' values echoed to the console, and the start value is simply scaled accordingly. Positive values move the virtual camera to the right, or down, respectively, so using 0.5 for both will put the image's lower right corner into the center of the view. This scaling is new after lux 1.0.9a; up to 1.0.9a the unscaled value was expected and you had to pass much smaller values.
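So, to open a large 'flat' image with its lower right quadrant centered, using the scaled values described above (lux versions after 1.0.9a):

```
lux --projection=mosaic --initial_dx=0.5 --initial_dy=0.5 scan.tif
```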


This is more of an internal option; it's set to 'true' when lux opens a PTO file standing for an image. PTO format describes the metrics of image sets in subtly different ways from lux, and this parameter tells lux to do things 'the PTO way'. One example is the handling of images with rotation tags: whether to interpret the horizontal field of view as pertaining to the image 'as stored' or 'as seen', i.e. before or after the rotation was applied.

You may see this option in lux ini files which were created by


This option tells lux that the input is in linear RGB. Lux can handle two types of input: images in sRGB and linear RGB. For some image formats, the type is predetermined: openEXR, for example, always holds linear RGB data. For JPG and PNG files, lux assumes that they contain sRGB data, and this flag is ignored. For other files, lux inspects the is_linear option, which is set to 'false' by default. So if you have a TIFF file holding linear RGB, you must pass '--is_linear=yes' to get a correct display.

This is a bit simplistic - it would be desirable to have proper colour space management, but I reserve this for some future release. For now, you'll have to live with the two options sRGB and linear RGB. Linear RGB, presented in a format with large dynamic range (like openEXR), can handle every type of content, so this is what you will want to use for content exceeding the 'normal' range. To feed such content to lux, you'll have to rely on other tools to convert the image to linear RGB first, like this:

convert input.tif -colorspace RGB output.exr
lux output.exr

Don't confuse this option with 'process_linear', which tells lux how to process image data internally.


This option can be used to force lux to use a different ISA (instruction set architecture). If you pass 'auto' here (which is the default), lux will try and figure out which ISA is best for the machine it runs on. As of this writing, lux can use four different ISAs. The first one, 'fallback', is some level of SSE which the compiler assumes to be a safe minimum when no additional compiler flags are given which would specify an ISA. This should be supported by all CPUs in circulation - if the processor doesn't even offer SSE, you probably won't want to try and run lux on it anyway.

One step up is AVX, which is quite rare, because it was replaced with AVX2 soon after, but for the processors which have AVX only, using it in favour of the default SSE is a good step up performance-wise. To use the AVX ISA, pass '--isa=avx'. AVX2 should now be the most common variety, while AVX512 isn't yet very widely distributed. To use AVX2, pass '--isa=avx2'.

When it comes to AVX512, lux code is compiled with -mavx512f, which is only one of several possible AVX512 compiler flags. Lacking such hardware, I haven't been able to establish how it performs - I would expect a fair performance increase due to the doubled register width, but how well this translates to lux' rendering speed I cannot tell. If you have a machine with AVX512 units, I'd like to hear from you! To use the AVX512f ISA, pass '--isa=avx512f'.

Whether you use the default or pass a specific ISA with this option, lux will echo the ISA it uses to the console. You can choose an ISA which your processor does not support, but then lux will crash with an 'illegal instruction'.


Some (especially older) cameras provide image geometry information which does not work with lux, because lux can't infer the correct lens crop factor. This leads to wrong hfov and vfov data for rectilinear images - at first sight the view looks okay, but if you move the virtual camera, the wrong fov values become apparent. For such images, you can 'manually' provide the projection and fov, but this is cumbersome, because if the camera can zoom, you have to pass a different fov for each zoom level. If you pass lens_crop_factor, lux can figure out the fov from the focal length, which is usually present. So you can view a whole batch of images from such a camera without much ado - all you need is the crop factor. You'll either find it in the camera's documentation, or you may find the sensor size instead - in that case, just divide 35.0 by the sensor width in mm, and you get the crop factor.
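As a back-of-the-envelope illustration of the arithmetic - the formula below is the standard field-of-view computation for a rectilinear lens, using the figure of 35.0 given above; the sensor width and focal length are made-up example numbers, and the exact constants lux uses internally may differ:

```python
import math

# hypothetical example: a small compact-camera sensor, ~6.17 mm wide
sensor_width = 6.17

# the crop factor as described above: 35.0 divided by the sensor width in mm
crop_factor = 35.0 / sensor_width        # ~5.67

def hfov_degrees(focal_length_mm, sensor_width_mm):
    # horizontal field of view of a rectilinear lens
    return 2.0 * math.degrees(math.atan(sensor_width_mm
                                        / (2.0 * focal_length_mm)))

# with the crop factor known, the sensor width - and hence the fov -
# can be derived from the focal length alone:
derived_width = 35.0 / crop_factor       # back to ~6.17 mm
hfov = hfov_degrees(4.3, derived_width)  # ~71 degrees at 4.3 mm focal length
```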


This is an 'exotic' option, which is rarely used. lux uses image pyramids, and normally the display is calculated from a pyramid level which is similar in resolution to the on-screen view. To get the correct level, lux first calculates a scaling factor, then takes the base-two logarithm and then chooses an integer nearby. Level bias is added to the base-two logarithm before picking the near-by integer. If you pass a positive value, the result is that lux will switch to a higher pyramid level (and lower resolution) earlier when you zoom out, and the image may look blurred if the bias is large enough. If you pass a negative value, it's just the other way round: lux will use a lower pyramid level than it 'should', and the image may look 'crisper', but you'll get aliasing when you move the view around.
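A sketch of the selection logic as described - this is my reading of the text, not lux' actual code:

```python
import math

def pyramid_level(scaling_factor, level_bias=0.0):
    # scaling_factor < 1 means the on-screen view is smaller than the
    # source; lux takes the base-two logarithm of the shrink factor and
    # picks a nearby integer. level_bias is added before rounding, so a
    # positive bias switches to higher (smaller) levels earlier.
    shrink = 1.0 / scaling_factor
    return max(0, round(math.log2(shrink) + level_bias))

pyramid_level(0.25)        # shrunk to a quarter: level 2
pyramid_level(0.25, 1.0)   # positive bias: level 3, lower resolution
pyramid_level(0.25, -1.0)  # negative bias: level 1, crisper but aliasing-prone
```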

Rendering data from a higher pyramid level is usually faster than rendering from a lower level, because the data are closer to each other in memory: higher pyramid levels are altogether smaller than lower ones. So this is a way to squash rendering times at the expense of resolution. Don't overdo it - your display would look blurred. But you can give it a shot if animated sequences stutter, and in some circumstances this may be preferable to the use of 'moving image scaling' which modifies the size of the rendered frames, to a similar effect. You can even combine both methods.


This is an option for synoptic displays - 'facet maps' of several images. These images usually overlap, and oftentimes their brightness values don't match well in the overlapping areas. Light balancing looks at all overlaps between images and tries to modify the individual images' brightness so that the differences between images are lowered to a minimum. If your images are in linear RGB, this may be so effective as to make the seams invisible even in the 'live' view, but with sRGB data there is usually some residual difference which is visible: most images from digital cameras are made to 'look good' with manipulations beyond simple conversion of the data from linear RGB to sRGB. lux converts these data to linear RGB, but the converted result is not necessarily true to the scene illuminant, which would be ideal. Recovering the scene illuminant is much more complex than simply converting sRGB data to linear RGB, and more sophisticated processing is needed. Panotools employs EMoR (the Empirical Model of Response), which lux doesn't understand (yet); this is why photometric modifications done with Panotools are often better than what lux can provide. lux will produce the best results when working on linear RGB data, and EMoR data in PTO files are simply ignored.

Be that as it may, using lux' automatic light balancing is usually a step up, so it's well worth giving it a try. In a lux session, you can always trigger the light balance by pressing 'Shift+L', but by passing '--light_balance=auto' on the command line, you can tell lux to do the light balancing right when the images are loaded, which is particularly useful for batch processing. Note that the result of light balancing is only remembered for and applied to the current image; once you tab to the next image, it's lost.

This option's default is 'by_ev' which uses the source images' Ev values in the PTO file to brighten/darken them to a balanced state. If the PTO was 'photometrically optimized' or if the camera's Ev values were correct and taken over to the PTO, this works quite well, especially if the images are in linear RGB.

I recently added a third option, 'hedged'. This option simply takes the images 'as they are' and overrides any Ev values, setting them to 1.0. Why would that make sense? At times you do a take with automatic exposure, so the Ev values of the source images will all differ, but the images themselves will have correct exposure. Darkening/brightening them according to Ev would be correct when trying to recover the scene illuminant, but brightening images with bright content will 'push some pixels over the edge' into overexposure if the target image has limited dynamic range, and it may brighten dark pixels so much that noise and quantization errors become visible. Using 'hedged lighting' avoids that, at the expense of uniform overall lighting - the differences are most apparent in the sky. Your mileage with hedged lighting will vary, but it's worth giving it a try before resorting to 'more extreme' measures to force your content into a limited dynamic range - what I refer to here is rendering to HDR and then 'compressing' the image as a faux bracket. So doing a stitch with "--stitch=yes --light_balance=hedged" and doing one with "--compress=yes" will both produce an image fitting into the sRGB dynamic range; hedged lighting is certainly faster than faux bracketing.


This is a synonym for ini_file


The 'magnifying glass' in lux actually covers the whole view, not just some cut-out area. It shows a display which uses the same interpolator as the unmagnified view, but a much denser coordinate grid. So the view you get with the magnifying glass differs from what you get by zooming with the same factor. The magnifying glass is to inspect the workings of the interpolation, whereas normal zooming tries to provide the best possible rendition at a given scale.

The default for the magnifying glass is to use a coordinate grid with a tenth of the mesh size, which is achieved by a magnifying_glass_factor value of 10.0. You can pass arbitrary real-valued factors.

metadata_query, metadata_format

These two arguments can be used to get a display of image metadata in the status line. metadata_query is a 'list' argument, metadata_format is a single format string.

--metadata_query=... adds a metadata query key to the list of queried metadata for the status line. This is a vector field, so you can pass this option several times, but currently the number of queried metadata keys is limited to ten (0-9). An example:

Pass --metadata_query=Exif.Photo.DateTimeOriginal to obtain the original date and time. If you use --metadata_format="%n %0" at the same time, you'll get the filename and date/time in the status line. For now, queries are for Exif tags only, and only for those understood by libexiv2, as listed in the exiv2 documentation. The conversion to the displayed string is left to libexiv2's toString() function and can't be influenced via lux. If the specified key has no value assigned to it, it will be displayed as '---'.

--metadata_format=... sets a format string for the metadata display in the status line. This is new and still experimental. Pass a format string where %n will be replaced by the current filename, and %0 to %9 will be replaced by the value gleaned from querying the corresponding metadata_query entry (see above). %h yields the image's hfov, %p its projection and %P the viewer's target projection. If you omit the format string, metadata specified with '--metadata_query=...' will still be displayed: the value will be prefixed by the key and a colon, and all specified keys will be displayed in numerical order.

TODO: currently, if you have no 'metadata_query' argument, the status line won't show, even if it only has format arguments like %n which are known without a query. To work around this problem, pass an empty query, like --metadata_query=""
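Putting the two options together - Exif.Photo.FNumber is just another standard Exif key, used here for illustration:

```
lux --metadata_query=Exif.Photo.DateTimeOriginal \
    --metadata_query=Exif.Photo.FNumber \
    --metadata_format="%n  %0  f/%1" *.jpg
```

The status line then shows the filename, capture date/time and aperture for each image.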


This value affects animated sequences - moving images - like pans or zooms. Animated sequences are computationally intensive, because lux calculates each frame 'from scratch'. Depending on the current pixel pipeline, this may exceed the host system's capacity, and the result is dropped frames, visible as 'stutter'. Just how much processing power is needed depends on many factors, but there is one common handle to lower processor load: the size of the frames. If you calculate small frames which show the same content, this takes - roughly - proportionally less time. To make them look roughly like the 'correctly sized' frames, you pass them to the GPU with the proviso that they should be enlarged to the desired size. This magnification is done entirely - and very efficiently - by the GPU, so you have more CPU cycles to deal with the demanding animation.

moving_image_scaling is a factor which is applied to frame size to get this effect. To lower computational load, you use a factor less than one. If, on the other hand, you have processing power aplenty and want to get animation quality up, you can pass values above one: this will result in the rendering of oversized frames which are scaled down by the GPU, just as it can be done for still images, which may benefit image quality (if you don't overdo it). And, of course, you can use this factor to simply raise computational load and burn CPU cycles ;)

Different GPUs may produce different results when upscaling frames. At times, the result may be inferior to what you'd get, say, from an upscaler built into your display hardware. If you routinely use moving_image_scaling with a fixed factor (like .75 to render to 720p while your display is fullHD) you may be better off switching your computer to 720p and relying on the monitor's upscaler if it has one - this upscaler may do a better job at displaying edges; the simple upscaling lux does with the GPU may result in visible blur.

The factor you pass here can be any real-valued number, but if you use successively smaller values, you'll soon notice the diminishing quality. Still, if you're working on slow hardware or have very demanding views to render, having blurred, but smooth animations may be preferable to stuttering sharp ones. The still images are unaffected. The change from the moving image to the still image can be quite noticeable with small moving_image_scaling factors, which can also be annoying.

moving_image_scaling can be modified in real time - the value you pass on the command line is merely the starting point. Use the GUI's animation quality buttons or press M/Shift+M to change the value, or use automatic animation quality which tries to adapt the value to avoid stutter.

Using very small values can produce an interesting 'pixelated' effect. Using values above one is rarely done and seldom has a valuable visible effect.

While this value affects animations, still images can use the same effect with still_image_scaling, please see there. The two factors are separate because the needs for still and moving images are opposite: moving images are usually optimized for fluid animation and use factors of less than one, still images may benefit from supersampling and factors above one. Setting still_image_scaling to the same value as moving_image_scaling has the advantage that the visible change in resolution from still to moving images is avoided.
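For example, to render animated frames at three quarters of the display size, and still images at the same scale to avoid the visible switch in resolution:

```
lux --moving_image_scaling=0.75 --still_image_scaling=0.75 pano.jpg
```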

next_after_snapshot, next_after_fusion, next_after_stitch

The 'next_after' family of options is used for batch processing. If an image is displayed with such an option active, lux will create a snapshot, exposure fusion or stitch and proceed to the next image - if any. The precise nature of the output depends on other parameters - see the 'snapshot...' family of options, which fix name, extent etc.

In the simplest case, with no other options from the snapshot family, passing --next_after_snapshot=yes will create a snapshot of the current view to an image file with a name derived from the current source image. The two other options from this family only make sense for facet maps, and they are silently ignored for non-facet-maps (post 1.0.9a). If they are applied to facet maps, they will trigger the code path indicated by the argument, so even if your view shows an exposure fusion of an exposure bracket, if you pass --next_after_stitch=yes it will create a stitch of the images, which may not be the expected behaviour.

Perhaps the easiest way to explain what these options do is in terms of a 'normal' lux session: they produce the same behaviour as pressing 'E', 'P' or 'U', respectively, and then 'Tab'. If snapshot_like_source is set to true, it's like pressing 'Shift+E', 'Shift+P' or 'Shift+U' and then 'Tab'.

Note that if you pass this option on the command line, it will be used for all images you pass to lux, so it can be used to process many images in the same way.
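A typical batch run - here, exposure-fusing a set of PTO files in one go, with source-like output (the filenames are of course placeholders):

```
lux --next_after_fusion=yes --snapshot_like_source=yes bracket1.pto bracket2.pto
```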

In a way these options reveal that lux has no notion of time: all options which are active at a given point in time take effect, which may in turn lead to another set of active options. So the 'next after' notion encodes that, when active, the current view will be used to produce output, and the next view will come on after that.

TODO: rethink these options: they go with the technicality of the process, not with the assumed intent. A single snapshot option which emits what the best-quality still image shows may be easier to grasp.


output_magnification is a real-valued magnification factor applied to all image output done by lux, no matter if it's a snapshot, a source-like snapshot, an exposure fusion, or a stitch. snapshot_magnification does not affect 'source-like snapshots': it's meant as a factor relating to the on-screen display, and the idea is that you settle on a factor which you maintain throughout your session. output_magnification is meant more for single image output - like stitching jobs - and it allows you to, e.g., scale the output size specifications in a PTO file's p-line with a factor, resulting in proportionally larger or smaller output. This is helpful to quickly render small versions of the output to check that it is as desired, before committing to the full scale, which may take a long time to compute if the output is large.

If snapshot_magnification is set, output_magnification is still applied, so both factors are honoured for 'ordinary' snapshots, stitches, fusions etc., whereas 'source-like' snapshots, stitches, fusions etc. are not affected by snapshot_magnification, but only by output_magnification.

If the output is cropped (due to a crop specification in a PTO file), the cropping is scaled proportionally, but the result of the scaling is rounded to integer values, so proportionality may not be 100% perfect due to roundoff.
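For a quick quarter-size test render of a stitching job, before committing to the full output size:

```
lux --output_magnification=0.25 --next_after_stitch=yes pano.pto
```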


OpenImageIO uses a plugin system for image reading, and the plugins take many different parameters which depend on the particular plugin. It's outside of lux' scope to deal with this multitude, but if you want to pass specific parameters to an OIIO plugin, you can use oiio_arg. This is a list parameter taking strings of the form "key=value" which you repeat for every parameter you need to pass, e.g. like this:

lux --oiio_arg=raw:half_size=1 --oiio_arg=raw:user_flip=-1 IMG.CR2

This example tells lux to pass two parameters to OIIO's raw plugin, namely to work with half-sized images and to autorotate the images. The latter isn't usually needed - except when you are processing PTO files which were made with dcraw auto-rotated images, which come out wrong with the default processing in lux.

Note the two equal signs used in each argument: the first one assigns the "key=value" expression to the next free slot in the oiio_arg list, the second is part of the "key=value" expression. You can find arguments to pass in the OIIO documentation. All arguments which can be passed to a 'config' object can be passed like this - lux does not check them in any way. lux handles them internally as strings, and OIIO converts them to the type it expects.

Note also that you don't have to use the command line to pass such arguments: you can also pass them via the 'extra arguments' field in the 'General Settings' panel. There, you can even select whether you want them active just for the currently viewed content or for the entire lux session.

Note that lux sets some parameters by default:

config [ "oiio:UnassociatedAlpha" ] = 1 ;
config [ "raw:auto_bright" ] = 1 ;

If you want to pass, e.g., a brightness compensation, you need to pass raw:auto_bright=0 first, or it won't have an effect.

Some arguments take more than one value. For such arguments, OIIO requires type information, and lux offers syntax for the purpose: you can pass an OIIO typestring suffixed to the key, separated by an '@' sign, like "--oiio_arg=key@typestr=val val ...". Note the quotes: the values have to be separated by space or tab, so the entire argument is quoted to 'hold it together'. Find out about OIIO typestrings in the OIIO documentation; for simple lists it's as easy as, e.g., float[2] in this example:

lux "--oiio_arg=raw:aber@float[2]=1.0004 1.0004" IMG.CR2

This is what I use for images done with my Samyang fisheye, to get libraw chromatic aberration compensation (note how the values are the reciprocal of what dcraw takes!).

Note that there mustn't be any spaces in the argument, except for the spaces (or tabs) needed to separate the multiple values. With this bit of artistry, it should be possible to exploit the entire range of OIIO input plugin's special capabilities, making use of OIIO infrastructure code and very little coding effort on the lux side.


This sets the path where lux will look for images. If you have images in some folder my_images, you can call lux like this:

lux --path=/path/to/my_images/ img1.jpg img2.jpg img3.jpg

Which is equivalent to

lux /path/to/my_images/img[1-3].jpg


This option tells lux how to process image data internally. Per default, process_linear is set to 'yes', because all internal image processing in lux uses mathematics which assume data in linear RGB. But to be displayed on the monitor, linear RGB data have to be converted to sRGB, which is time-consuming. If you pass '--process_linear=no', lux will use the same mathematics, but apply them to sRGB data and omit the final conversion to sRGB. While this is not mathematically correct, often it's 'good enough' and 'looks okay', so lux has this option, which you may use if rendering is not fast enough. If the display process leaves image brightness, black point, white point, and white balance untouched, this won't be very noticeable, even though the image pyramids won't be strictly correct either, because they should be set up with linear RGB data as well.
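The conversion in question is the standard sRGB transfer function - the textbook formula, not necessarily lux' exact implementation:

```python
def linear_to_srgb(l):
    # standard sRGB encoding of a linear intensity value in [0, 1]
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1.0 / 2.4) - 0.055

linear_to_srgb(0.5)  # ~0.735: linear mid-grey comes out fairly bright in sRGB
```

Applying filters and interpolation to sRGB values directly, as --process_linear=no does, skips this nonlinearity - mathematically incorrect, but often visually acceptable.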


This option sets the 'source projection' of the input to lux, the one used for the images you pass to lux for display. It's essential that lux uses the correct projection, but oftentimes it can't figure the projection out from the input, in which case you should supply the information with this option.

The default value for this option is "auto", meaning: try and glean the information from the input, and use "mosaic" if nothing can be found. This works well for 'normal' images without metadata (they will be displayed 'flat', and without perspective correction) - and for panoramas with proper metadata. If you merely display images with correct metadata, you'll never need to use this option. Proper metadata make your images 'fit' for display with lux without any options needed to display them correctly - if you produce, e.g., panoramas, your best practice is to add correct metadata.

When you look through the list of possible values for this option, you first see the projections which can be used for single-image input: "spherical", "cylindric", "rectilinear", "map", "mosaic", "stereographic" and "fisheye", where "map" and "mosaic" are synonyms. Each of these can be abbreviated with a single letter (note that the abbreviation for stereographic is 'g', because 's' already stands for 'spherical', which is more common), and since the projection is such an important parameter, it can also be passed as a short option '-p'. So indicating that an image is in spherical projection can be expressed with '-ps'.

After the single-image projections, you'll notice two more 'projections': "cubemap" and "facet_map". Both of these 'multi-image projections' can only be passed in lux ini files or will be used internally when PTO input is processed.

Passing a projection to lux does not require that the image actually is done in this projection - it only tells lux which projection to use. If the image is done in a specific projection and you pass the same to the lux invocation, the result should be geometrically correct, but you are perfectly free to pass different projections, which may be useful to create effects.

If you pass any projection other than "map" or "mosaic", you must pass the horizontal field of view as well, because lux has no way of guessing it. Note that even if the image has the horizontal field of view in its metadata, if you do pass the projection, this value will be ignored and you are required to pass the horizontal field of view as well. This is a precaution against the two sources of information becoming mixed up.
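So a full spherical panorama without usable metadata would be shown with something like this - assuming --hfov is the long option for the horizontal field of view:

```
lux --projection=spherical --hfov=360 pano.jpg
```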

Note that there is another option called 'target_projection' which sets the projection used for the view you get to see. The target projection defaults to rectilinear and you'll rarely choose something else, but you have the same choice of single-image projections for the target projection as you have for the source projection, and if you use lux to, e.g., render spherical panoramas, you have to set the target projection right to get correct output. Images created by lux will always have projection and field of view metadata if the image format supports them - openEXR output is supplied with a 'sidecar' lux ini file for the purpose.

One more thing: if lux renders an image from input which is not in 'mosaic' projection, you'll get output which has projection=rectilinear in its metadata (unless you've set a different target projection). If you view such images with lux, the rectilinear projection and the horizontal field of view from the metadata will be recognized, and the image will be displayed with automatic perspective correction. While this is technically correct, it may be surprising. If you want such images to be displayed 'flat', remove the projection metadatum or invoke lux with --projection=mosaic, or -pm for short. The command line option overrides the metadata.


Here's the verbatim copy from the README file:

'shrink factor' from one pyramid level to the next. This also determines how many levels there will be; if the value is small, the number of levels will rise, possibly beyond a manageable measure. Typical values: 1.25 - 2, and lux silently enforces a minimum, to prevent the creation of very 'steep' pyramids which need a lot of memory with no discernible effect. The default here is 2: each pyramid level will be roughly half as wide and half as high as its predecessor. If you use, say, 1.41, each level will have roughly half as many pixels as its predecessor.

With 'area decimation' for downscaling, which is now the default, the amount of smoothing adapts automatically to the scaling step, because this filter is adaptive. When using lux' 'classic' mode of downscaling with a b-spline reconstruction filter, you can vary the degree of smoothing - a smoothing level of 7 fits well with a scaling step of 2. Keep in mind that no scaling operation is 'perfect', especially not the methods lux uses, because they are chosen to be fast to compute rather than extremely precise. So if you choose smaller scaling steps than two, you'll get more downscaling operations, each degrading the image a bit, and as this accumulates you may get noticeable blur in heavily downscaled views.
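To get a feel for how the scaling step affects pyramid depth, here's a rough sketch; the cut-off width of 32 pixels is an illustrative assumption, not lux' actual limit:

```python
def pyramid_levels(width, scaling_step, min_width=32):
    # count the levels of a pyramid whose width shrinks by scaling_step
    # per level, stopping before the width falls below min_width
    levels = 1
    while width / scaling_step >= min_width:
        width /= scaling_step
        levels += 1
    return levels

pyramid_levels(4000, 2.0)   # 7 levels: 4000, 2000, ..., 62.5
pyramid_levels(4000, 1.25)  # 22 levels - much more memory and work
```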


Here's the verbatim copy from the README file:

smoothing level. lux now uses 'area decimation' as its default downscaling method; this will work with pyramid_scaling_step in the range of 1-2. To use this decimation filter, pass --pyramid_smoothing_level=-1. This is fast and the result looks good, so I decided to make it the default. Another good quality decimation filter is applied with --pyramid_smoothing_level=-2; this uses a binomial filter (1/4, 1/2, 1/4) for downscaling. lux' 'classic' method was using a b-spline reconstruction filter of large-ish degree without prefiltering. You can get this behaviour by passing positive values, which set the degree of the b-spline reconstruction filter.

The remarks above about scaling steps less than two apply to this downscaling method as well, so only pick a scaling step other than two if you need to. When passing -2 here, your scaling step should not be too far off two. The classic downscaling method uses a b-spline reconstruction filter of the degree passed to this argument. lux' standard was 7, since a reconstruction filter for a heptic b-spline is close to a 'standard' Burt filter. Use a value below 7 for less smoothing (like when you use a 'shrink factor' below 2). This is a matter of taste, really, and the differences are quite hard to tell. You want to use a level of at least 2, because levels 0 and 1 don't produce any smoothing at all, and level 0 does not even interpolate, so you get bad aliasing. Level 7 reconstruction sounds as if it takes lots of processing, but since we're only scaling, we can use vspline's grid_eval method, which is a good deal faster than ordinary remaps.
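To illustrate the binomial filter mentioned above, here's a one-dimensional sketch in plain Python: smooth with the kernel (1/4, 1/2, 1/4), then keep every second sample. This is only a demonstration of the principle - lux works on 2D images and, as described below, does not actually use the filter-then-decimate formulation:

```python
def binomial_decimate(signal):
    """Smooth a 1D signal with the binomial kernel (1/4, 1/2, 1/4),
    then keep every second sample. Edges are handled by clamping,
    which is a simplification for this sketch."""
    n = len(signal)
    smoothed = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        smoothed.append(0.25 * left + 0.5 * signal[i] + 0.25 * right)
    return smoothed[::2]

print(binomial_decimate([0, 0, 4, 0, 0, 0, 4, 0]))  # [0.0, 2.0, 0.0, 2.0]
```

Note how the isolated spikes are spread out before subsampling - without the smoothing step, the subsampled signal could miss them entirely, which is the aliasing the filter prevents.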

There are now additional downscaling filters available, which are activated by passing negative numbers and should be used with scaling steps near 2.0:

Especially using the half-band filters should produce near-optimal results, if my theoretical reasoning is correct: An optimal half-band filter should remove all frequency content above half the Nyquist frequency. Applying it 'on the fly' by convolving the b-spline reconstruction kernel with it (using a 'convolving basis functor') produces a signal which should be immune to resampling (if the spline is of sufficiently high degree), so the subsampling will yield true values for all off-grid locations. The other filters tend to have wider transition bands, leading to aliased results from the remaining higher frequencies.

A bit of technical background: all downscaling filters lux uses catch two birds with one stone: the low-pass filter and the subsampling are lumped together in one handy step using a grid evaluation on the 'current' level to get the 'next' level of the pyramid. This approach makes it possible to use a sampling grid which does not coincide with the sample positions of the 'current' level, as would be required for the 'normal' process of using a smoothing filter, followed by decimation. It's fast and efficient, and with area decimation the effect is roughly as good as a binomial filter followed by decimation by a factor of two. But the 'freedom from the grid' makes it possible to use a subsampling grid which preserves the boundary conditions: if the current pyramid level has, for example, periodic or reflective boundary conditions, the 'next' level will as well, and, when fed appropriately scaled and shifted coordinates (using a vspline::domain), it will behave like the 'current' level, only yielding smoothed values instead. It's clear that this 'boundary equivalence' would only be possible for a few rare exceptions of 'current' grids (periodic grids with even sample counts, mirror boundaries with 4n+1 samples) when using filter+decimation, and certainly not for the reflective boundary conditions which lux mainly uses. Most grids can't be decimated to produce a 'boundary-equivalent' down-scaled version. If you're interested, have a look at the decimation code and find 'make_decimator'.

On the downside, off-grid subsampling with a decimator isn't easily tackled mathematically - especially not with the 'area decimator', which does not have a fixed transfer function. Initially I thought this might be a problem, but I found the results satisfactory and could not detect any drawbacks. So I'd say: the proof is in the pudding. As mentioned above, the desired behaviour is best approached using a half-band filter for downscaling. This takes more time to set up, but small half-band filters like 11-tap or 15-tap are not much slower.


This option sets the degree of the b-spline interpolator used for still image display. The default is 3, a cubic b-spline, which is smooth and continuous up to the second derivative, and still reasonably fast to compute, using a weighted sum of sixteen coefficients for each result value. For still images, the speed of computation is much less of an issue than for animated sequences, because they are only rendered once the view comes to rest, and the user will rarely notice the time span needed to do so.

Note that values above 7 will silently be set to 7.
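The cost of higher degrees grows quadratically, which explains both the figure of sixteen coefficients for the cubic default and why the degree is capped. A quick sanity check in plain Python (simple arithmetic, not lux code):

```python
def coefficients_per_pixel(degree):
    """A b-spline of a given degree uses (degree + 1) coefficients
    per axis, so (degree + 1) squared for a 2D image."""
    return (degree + 1) ** 2

print(coefficients_per_pixel(3))  # 16 - the cubic default mentioned above
print(coefficients_per_pixel(7))  # 64 - the cap, four times the work
```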


This option makes lux ignore image metadata specifying rectilinear projection - instead, such images are displayed 'flat'. Only if --projection=rectilinear is passed on the command line or via a .lux file will lux use rectilinear source projection.

Why this option? When viewing image sets where 'flat' and rectilinear images are mixed, the different display of the rectilinear images and their automatic perspective correction tend to feel inconsistent. So with this option active, all 'flat' and rectilinear images are displayed alike, but images in 'real' panoramic projections like spherical are displayed as such. If rectilinear images need to be processed further, this option should be switched off (the default) to get proper perspective correction etc. - having it on is more for slideshows and the like.


This option reverses the effect of click-and-drag gestures with the primary mouse button. Using click-and-drag as the main gesture to interact with the display gives rise to the question of how to interpret it: do you think of interacting with the image you see or do you think of interacting with the virtual camera producing it? Both ways of thought are valid, and this option allows you to change lux' behaviour. The default is to think in terms of interaction with the virtual camera, so, e.g., click-dragging to the right will move the virtual camera to the right, hence the content seems to move to the left.

The one thing you can't do in lux is to make click-and-drag be interpreted 'as if the mouse pointer were glued to a spot in the image'. This common way of interacting with images is okay for, say, image processing programs - but try and effect a smooth pan over several screens' worth of data with such controls, and you'll see why lux does things the way it does.

This option only affects click-and-drag gestures with the primary mouse button; use the next option for the secondary mouse button.


This option reverses the effect of click-and-drag gestures with the secondary mouse button.


when true, 'secondary click vertical drag' produces a 'focused zoom' which keeps the content at the click position roughly steady.


when true, the mouse scroll wheel produces a 'focused zoom' which keeps the content at the pointer position roughly steady.


If lux encounters a file which it can't process, the default is to silently ignore the file and try the next one - if any. By setting show_error_dialog=yes, you can make lux produce an error dialog box if it encounters a file it can't handle. If a PTO file or lux ini file refers to an image which can't be handled or does not exist, this is an unrecoverable error, though.


Normally, lux displays a status line. The precise content of the status line may vary, but with this option you can switch it off altogether, which is nice for 'presentation' situations like slide shows or digital signage.


time after a mouse button is pressed, in milliseconds, during which releasing the button triggers a single click. If the button is not released within this time slot, the interaction is taken to be a click-drag.
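The decision logic can be sketched in a few lines of Python - note that the threshold value of 250 ms used here is an illustrative placeholder, not lux' actual default:

```python
def classify_press(press_ms, release_ms, click_threshold_ms=250):
    """Classify a button interaction: a release within the threshold
    counts as a single click, anything longer as a click-drag.
    The 250 ms default is a made-up figure for this sketch."""
    if release_ms - press_ms <= click_threshold_ms:
        return "click"
    return "drag"

print(classify_press(0, 120))  # click
print(classify_press(0, 600))  # drag
```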


When showing slide shows, lux switches to the next image after a certain time, the 'slide interval', has passed - if you don't interact with the view. The default is seven seconds, but you can change the value with this option, or via the numerical field next to the 'SLIDES' button in the GUI. The value is passed in seconds, and you can pass fractions. You can pass very small values, too, but there's only so much you can achieve: lux will still read the image from disk and produce an internal representation, which takes some time, so there's a technical limit which this setting won't undo - lux won't try and move to the next image before the current one was set up to be displayed.


With this option, you can start lux with slideshow mode active; the default is to have slideshow mode off on startup.

There is a short option '-d' which sets the slide show interval and activates slide show mode at the same time, so -d3 has the same effect as the more verbose '--slide_interval=3 --slideshow_on=yes'.


This option tells lux to take snapshots with the same projection and aspect ratio as the source image, or one specific image in a facet map or cube map - which can be chosen by passing 'snapshot_facet'. If there is no snapshot_facet argument, the first facet is picked by default. Note that numbering is C-style and starts with zero. So, while 'normal' snapshots take their aspect ratio and base size from the current view, 'source-like' snapshots take it from a source image. And while 'normal' snapshots produce an image in the given target projection, 'source-like' snapshots use the source image's projection. 'source-like' snapshots are meant to produce images which might be used instead of a given source image, with all modifications applied by the viewer. This does not include lens correction and vignetting, though, so if you have source images in a facet map which have either, you won't be able to reproduce them completely - instead you'll get a rendition of an image with the source image's projection, orientation and field of view, but with content generated with all lens and vignetting corrections applied.

Snapshot magnification is still applied, so you can easily produce scaled 'source-like' snapshots, for example when you produce snapshots for web export:

lux --snapshot_magnification=.33 --snapshot_like_source=yes \
    --snapshot_compression=80 some_image.jpg

There is one important point here when doing 'source-like' snapshots, stitches or fusions done from PTO files: For such output, the 'p-line' in the PTO file defines the output's projection, size, field of view and cropping. With this mechanism it's easy to use lux as a stitcher if you have PTO input: either you load the PTO into lux and then press, e.g. 'Shift+P' for a source-like stitch, or you batch the process by using --next_after_stitch=yes. So to stitch a panorama from a PTO automatically and without user intervention, you invoke lux like this:

lux --next_after_stitch=yes --snapshot_like_source=yes pano.pto

And to do an exposure fusion from a PTO, use

lux --next_after_fusion=yes --snapshot_like_source=yes pano.pto

snapshot_like_source is useful to 'imbue' an image with HDR information from an exposure bracket, used together with --blending=hdr. Another option used in this context will likely be --snapshot_extension=exr, and it works well with next_after_snapshot (see below), making it easy to process sets of exposure brackets in one go. If you have brackets in folders in the current working directory, with the registration in 'bracket.pto', you'd do something like this:

lux --blending=hdr --snapshot_like_source=yes --snapshot_facet=0 \
    --next_after_snapshot=yes --snapshot_extension=exr ...
This will place an exr snapshot of each blended bracket next to the pto file defining it. While the output is created, you'll briefly see each image as it is processed. You might prefer not to 'imbue' the first (#0) facet: if your camera produces, say, the shortest exposure as the second shot of the bracket and you'd like that shot to be 'imbued', just add --snapshot_facet=1. It's wise to 'imbue' the 'darkest' shot: within its boundaries, it will have the most valid intensity values, because overexposure is least likely in the 'darkest' shot. The data it holds may be noisy when 'pulled up', but the noise will show only where the 'brighter' facets don't provide usable content, which is only a small part of the image near the margin if the facets don't overlap perfectly.

So you may get thin stripes with noisy data - rather than thin stripes with overexposed pixels, which would be definitely worse. Keep in mind that, when --snapshot_facet is not passed and the input is a PTO file, the output will match the PTO's p-line, and it will often be cropped. If you intend to use the fused brackets to, e.g., stitch a panorama, this may be a problem, because you'd prefer uncropped images with equal FOV. So to produce input for panoramas, you're probably better off with --snapshot_facet set explicitly to one of the source images in the bracket, never mind the slight artifacts near the image boundary - because lux stitches strictly by geometry, the artifacts along the edges will not usually show.

In this context, I'd like to point to lux' 'compound' options or 'actions', which bundle several options to perform a common operation. These compound options are used to produce 'standard' stitches and exposure fusions, focus stacks, faux brackets and HDR merges, which all use snapshot_like_source internally - so, if you work from a PTO file, the output will reflect the settings in the PTO file's p-line. Find out more about compound actions in the relevant sections:

stitch fuse hdr_fuse hdr_merge focus_stack compress


This is an integer option used with facet maps. The default is -1, meaning 'unset'. If input is a PTO file, its p-line will be honoured if snapshot_facet is unset. If snapshot_facet is set to a value of zero or above, the facet with this number will be used as the 'template' for the output - internally, lux uses the term 'nonstandard target', as opposed to the 'standard target', which is the window shown on-screen, or the whole screen in full-screen mode.

snapshot_facet can be used to create exposure-fused output which matches the specs of one of the source images (minus lens distortion, vignetting and varying brightness), which may be desirable when the result is used for further processing. Then you'd usually use the darkest image as template, because it contains an area which is least likely to lack well-exposed content, whereas the default - zero - may be another facet: with my Canon cameras, for example, the first image in a bracket is the middle exposure.

If you want to make a panorama of several brackets, leaving it up to the specs in the p-line may produce images which have slightly different hfov and aren't correctly centered (auto-cropping does not remove equally-sized bits from all edges, so the result is not correctly centered, and if it's saved without recording the uncropped size and the relative position of the cropped area, the correct geometry can't be reconstructed). Using the specs of one specific facet will produce equally-sized fused brackets, which may have unwanted bits along the margins - but the margins usually don't make it into the final panorama anyway, and, if needed, they can be trimmed easily to a common size.

Another use for snapshot_facet is to create artificial images with the content of a stitched panorama, but the specs of the source images: if you take (stitched) snapshots (using Shift+P) with snapshot_facet set to 0, 1, etc., you'll get a set of images shaped like the source images, but filled with the stitched content. The 'artificial facets' will not always look precisely like the original facets: when creating 'source-like snapshots', lux does not apply artificial vignetting or lens distortion, which would be necessary to create artificial facets matching the original facets precisely, nor does lux apply different facet brightness. What you get is artificial facets with the same width, height, hfov and projection as the source facets, and the processing of lens distortion in the source images may result in bits of 'black area' showing in the space which is transferred to the artificial facets, and other bits being cut off along the margins. If your original facets don't have lens correction active, the artificial facets should match the original facets 1:1 geometrically.


This option sets the snapshot magnification. Snapshots are taken with no magnification by default; pass a value other than 1.0 to override this. Note that this magnification produces an image which is like what you'd see with a window of the modified size, even if that is larger than your screen. It's not just a blown-up version of the current view, but instead calculated from scratch, rendering to a pixel array of the magnified size.

This parameter is needed to produce snapshots of a given fixed size: the shape of a 'normal' snapshot will always be the same as the shape of the current view: either the shape of your screen when you're in full-screen mode, or the shape of your display window. Suppose you want to stitch a full spherical panorama sized 6000x3000. You'd use a display window with a 2:1 aspect ratio and the appropriate snapshot magnification, like

lux --fullscreen=false --window_width=1500 --window_height=750 \
    --snapshot_magnification=4 ...
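The arithmetic behind the example above is simple: window size times magnification gives output size. A small helper - illustrative, not part of lux - picks window dimensions for a desired snapshot size:

```python
def window_for_snapshot(target_w, target_h, magnification):
    """Given a desired snapshot size and a snapshot magnification,
    return the window size to pass via --window_width/--window_height.
    Assumes the target size is evenly divisible by the magnification."""
    return target_w // magnification, target_h // magnification

print(window_for_snapshot(6000, 3000, 4))  # (1500, 750), as in the example
```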

Note that, when 'snapshot_like_source' is set to 'true', snapshot_magnification won't have an effect, and lux will stick to the metrics of the p-line or the source facet. TODO: add an option to that effect


This option sets the prefix used for snapshots, stitches and fusions. The default is to use the source image's filename suffixed with .lux. If you pass --snapshot_prefix=xyz the images will be named xyz1.jpg, xyz2.jpg etc.

Note that the prefix will persist throughout the entire lux session, so you can go through a set of images and take snapshots where you like, and all snapshots from the session will share the prefix and be numbered sequentially.


This option forces the snapshot (or stitch or fusion) base name; the resulting image will be named by combining this base name with the snapshot extension, with no intervening infixes. Note that if you take several snapshots in succession, only the last one will 'survive': lux overwrites output without warning. This is one way of actually destroying extant images: if you use the base name and extension of an existing image, it will be overwritten without warning. So please be careful with this option - it's intended for script-driven tasks. The 'safer' standard naming, with an infixed serial number and '.lux.', may overwrite previously made lux snapshots, but not 'normally named' files.


Snapshots, stitches and fusions are done in the background, while your session continues in the foreground, so both processes compete for resources. The default behaviour is to use as many threads for the snapshot as there are physical cores, but to use four times as many for rendering animations. The higher figure for animations results in 'crowding out' snapshots, and if you're not operating too close to the system's limits, animations will remain smooth even while snapshots are rendered. If the defaults don't work for you, you can pass snapshot_threads to select a specific number of threads to which the task is assigned. The minimum is one thread.

The number of threads you select will be fixed for the time it takes to render the output image, even if there is no other computational load, e.g. because the view is at rest. So if you use only one thread, you may be stuck with the current view for a long time, because lux will only proceed to the next image once all snapshots are ready for the current image. The default of 'as many threads as there are physical cores' will be nearly as fast as using all threads from lux' thread pool when the view is at rest, but if you want to speed things up to the max, you may specify more threads, which will have a small effect - up to the number of threads in the thread pool, which is four times the physical cores. This is a good idea for batch mode operation. If you want to continue working on the system, you may want to 'throttle' lux by lowering snapshot_threads, but if you want results as quickly as possible, you may want to raise it above the default.


Many actions in lux are triggered by pressing and holding keys, or by click-drag gestures. These operations are executed with a set speed which I arrived at heuristically. The speed of individual actions can't be set, but you can use 'snappiness' to set the overall reaction speed, just as you can influence this speed by pressing X/Shift+X.

One situation where you want to operate with reduced 'snappiness' is when you have to position or zoom very precisely, and the default snappiness of 0.005 is too fast.


For lux output to file formats using compression, this option sets the compression ratio. The default is 90, meaning 90%.


This option sets the file extension used for lux output, and thereby also the file format. Use one of "JPG", "JPEG", "PNG", "TIF", "TIFF", "EXR", "jpg", "jpeg", "png", "tif", "tiff" or "exr". The default is "jpg".


TIFF output can be in sRGB or linear RGB. The default is to produce sRGB TIFFs, but if you set --snapshot_tiff_linear=yes, the output will be in linear RGB instead.


If this option is set to true (the default), then if lux displays a synoptic view with HDR blending mode (like an exposure bracket), lux will, when the view is at rest, calculate an exposure fusion for the current view and show that instead of the 'fast' rendition. This can be annoying, because the content changes quite noticeably, so you can use this option to switch the behaviour off. With snap_to_fusion off, you'll only see the 'fast' rendition which clamps intensity values to the sRGB range, but your view will remain consistent between animated sequences and still image display.


The default is to use the 'hq interpolator' to render single frames when the viewer is at rest, or even produce 'proper' stitches/exposure fusions (see snap_to_fusion and snap_to_stitch). This behaviour can be switched on/off with F12 or the GUI button labeled 'IDLE HQ', the command line argument only sets the initial state. Using snap_to_hq=false will also disable snap_to_stitch and snap_to_fusion.


If this option is set to true (the default), then if lux displays a synoptic view with ranked blending mode (like a panorama), lux will, when the view is at rest, calculate a blended image for the current view and show that instead of the 'fast' rendition. This can be annoying, because the content changes quite noticeably, so you can use this option to switch the behaviour off. With snap_to_stitch off, you'll only see the 'fast' rendition which displays 'hard' boundaries between the facets, or feathering if that is active, but your view will remain consistent between animated sequences and still image display.


This option sets the stack processing mode. This datum is only relevant for stitching jobs (blending=ranked) and defines how image stacks are put together before they are stitched with lux' modified Burt&Adelson image splining algorithm. Note that you will only get to see such stitches on-screen if 'snap_to_hq' and 'snap_to_stitch' are both set, which is the default. For animated sequences (while the view is 'in motion'), lux ignores all stack members but the first, which goes as the 'stack parent', with the assumption that it is the medium exposure. When the viewer is at rest, and only snap_to_hq is set, lux will also suppress all but the 'stack parents'. If snap_to_stitch is also set, lux will invoke its image splining code, and this is where this option becomes relevant.

There are three different stacking modes: "fusion", "hdr" and "first". The default, "fusion", will submit the stack to an exposure fusion and use the result as partial image of the stitch. If 'stack' is set to "hdr", the stack will instead be hdr-merged, producing a partial image with extended dynamic range for the stitch. This option allows lux to create HDR panoramas from image sets with stacks. The third mode, "first", picks the first ('parent') image from the stack and uses it as a partial image for the stitch. This is similar to the view in animated sequences, or without snap_to_stitch, but because all the stack parents in view are now joined with image splining, you get a smooth(er) blend than the view in animations, which shows 'hard' facet boundaries.

Using "first" mode is quite quick, because only the stack parents are processed, but of course all the extra information from the other images in the stacks - which you took for a reason, after all - is simply ignored. Nevertheless it's a valid mode to see how well the images fit, and to get a first impression. "hdr" and "fusion" are slow - here, all stack members are made manifest in warped form and exposure-fused with the image splining code, which will take a fair while. Initially I used lux' pixel-based HDR merging code for stack=hdr, but now I route the HDR merging process through a multilevel blend, which uses the same code as exposure fusion, with modified parameters which result in a multilevel HDR merge, producing pleasing output without - or with lessened - small-scale artifacts.

This new routing is, so far, reserved for stack processing, so the incoming PTO must have stack assignments. Note that the registration code (e.g. hugin) may assign identical orientation (y, p, r) to all members of a stack, which is only appropriate if the bracket was taken with a rock-solid tripod. You may have to set the software to allow for stack members which aren't perfectly aligned. If there is only one stack, that's okay, but you still have to assign --blending=ranked, because only 'stitching' jobs do stack processing for now - with --blending=hdr and snap_to_fusion unset, you'll get the pixel-based HDR merging code. I hope to disentangle the parametrization to make the choice of routing more obvious, and I suspect that I'll end up routing all high-quality rendering for HDR output via the new mode of multilevel HDR blending code.

Lux now crossfades into an on-screen stitch, and at times the difference between the unstitched and the stitched view is quite hard to tell - you may have to look specifically at a place where the unstitched view shows a hard facet boundary to see it 'magically' disappear after some time. The change to an exposure-fused view (with stack=fusion) is usually quite noticeable, because it changes brightness values significantly, but the change to an HDR-merged view may be hard to notice, because the on-screen view can only show the standard sRGB range. To verify that an HDR merge has actually occurred, make a snapshot to exr format (use --snapshot_extension=exr) and inspect the result.

Note that the new multilevel HDR merging makes an approach for dynamic range compression more feasible: using a two-step process of first creating an HDR rendition and then faux-bracketing the result. As the first step does now produce very nice quality HDR images, the input of the faux-bracketing is good, and the faux-bracketing itself can be tweaked to use just the Ev values for the artificial bracket which best suit the desired effect.
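Putting the pieces together, a lux ini file for rendering an HDR panorama from a PTO with stack assignments might combine the options discussed in this entry like this (a sketch, using ini syntax as described in the invocation chapter; adapt it to your own workflow):

```
blending=ranked
stack=hdr
snapshot_like_source=yes
snapshot_extension=exr
```

With these settings, the stacks are hdr-merged, the partial images are stitched to the PTO's p-line specs, and the result is written to an exr file which preserves the extended dynamic range.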


This is another 'compound' option. It makes lux create a 'source-like' stitch of the current facet map and then proceed to the next image, if any. Passing --stitch=yes is meant for batch processing and merely saves you passing a set of individual options to the same effect. Typically, your input will be a PTO file, and the output which lux creates will be to the specifications given in the PTO file's p-line.


When viewing facet maps, passing a facet number with --solo=... will show this facet only, displaying all content which fits the current view. Using this from the command line may be useful to create a set of 'warped' images, by picking an output format which corresponds to some intended output size and then placing content from the facets into it, creating a snapshot for each facet. Such a set of warped images can then be used by a stitching program to blend the partial images, with a few caveats.

The 'solo' facility can also be used by pressing Shift+Right and Shift+Left, which is a handy way to move through the partials while displaying a synoptic view, to get an idea of how each of them is modified to 'fit' into the synoptic rendition. The first Shift+Right will move to the solo view of the first facet, the next one to the next facet etc - and Shift+Left will go the opposite way, ending with the synoptic view.


threshold of horizontal displacement which must be exceeded to recognize the 'slap' or 'spin' gesture.


'squashing' images in lux means discarding some of their original resolution. Lux uses image pyramids to represent images internally. The lowest level of such an image pyramid normally contains an equivalent of the source image in its full resolution. The next level up has an image with reduced scale - typically the size is reduced to half the width and height of the base level image, but the factor can be changed by using pyramid_scaling_step. The next level up is again reduced by the same factor, and so on, up to the top of the pyramid. Squashing removes pyramid levels, starting with the lowest one. The new base level therefore has an image with reduced resolution, and fine detail may be lost. So why would one want to do this? To save memory. The lowest pyramid level is also the level using up most memory: for the default scaling factor of two, the base level will use four times as much memory as the next level up, and discarding it frees a lot of memory, which may be scarce if the source image is very large, or if there are many large facets in a facet map (see facet_squash for 'per-facet squashing'). Also, the interpolator built for showing magnified views will be smaller. When squash is used, time to first light will typically be reduced, and especially with large images or facet maps this can make a noticeable difference.

Another aspect is memory access during rendering. Full-resolution data, taking up a lot of memory, have the image information needed for a given view 'spread out' over a larger area in memory than data with less resolution, and the system usually takes longer to 'collect' image information from high-resolution data in memory, which affects rendering times.

Finally, the full resolution offered by modern sensors may be overkill - especially smartphones often have huge numbers of sensor pixels without the optics to match, and the images they produce are unnecessarily large without any visible gain. Squashing them is often a good option.

'squash' is given as a positive integer, and tells lux how many pyramid levels should be discarded. A typical value here is one, but you can try larger values, especially if you're curious to see the effect - using a squash of one is often quite hard to see.
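The memory effect is easy to estimate: with a scaling step of two, each level holds a quarter of the pixels of the level below it, so discarding the base level removes roughly three quarters of the pyramid's pixel count. A rough model in Python (illustrative figures, not lux' actual allocation):

```python
def pyramid_pixels(width, height, levels, scaling_step=2.0):
    """Total pixel count of an image pyramid; level sizes are
    truncated to whole pixels, a simplification for this sketch."""
    total = 0
    w, h = width, height
    for _ in range(levels):
        total += int(w) * int(h)
        w /= scaling_step
        h /= scaling_step
    return total

full = pyramid_pixels(8000, 6000, 5)
squashed = pyramid_pixels(4000, 3000, 4)  # squash=1: base level discarded
print(full, squashed, squashed / full)    # squashing saves roughly 75%
```

The same reasoning explains why larger squash values show quickly diminishing returns: the second squash level only removes three quarters of the already much smaller remainder.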


This option sets the still image scaling factor. When the image display is at rest, lux renders an image with its 'quality' interpolator. This image can be rendered larger than the screen area it will occupy ('supersampling'), and the GPU will compress it to fit into its designated screen area. If you're not overdoing it, this may give a crisper still image. The default here is 1, which does not magnify. So this factor works just like moving_image_scaling: it modifies the size of the frame, not its visible content. Rendering larger frames for still images takes more time, but since the still image is only rendered once, the extra time makes little difference: the user won't notice if this single frame takes 40ms to render instead of 20ms. Try it out if you're not satisfied with the quality of your still images, but don't overdo it. Supersampling may result in the use of a lower level from the image pyramid, which accounts for a crisper display. The usefulness of this option has diminished since the implementation of 'area decimation', which always uses the next-lower pyramid level and area-decimates that to yield the still image, which usually results in sufficiently 'crisp' views. lux now renders all still images with area decimation; using supersampling on top is often overkill.


Set a frame number limit. You can also use the short option -z. When the given number of frames was generated for the current image, lux will proceed to the next image - if any. This is good for automated testing/benchmarking: I like using a 1000-frame pan over a full spherical to benchmark my code, like

lux -ps -h360 -A.05 -z1000 spherical.tif

It can also be used for digital signage. If you have a set of panoramas img*.jpg, you can run an automatic pan over all of them by invoking lux with autopan enabled and a frame number limit, like

lux --autopan=0.05 --stop_after=2000 img*.jpg

Note, though, that if you merely keep repeating the same animated sequences, you may be better off recording lux' output to a video and looping over that - this should consume much less power.

If no frames are generated, the limit will not be reached, so usually you'll combine this option with autopan. If it's active during 'normal' viewing, whether the limit is reached depends on how many new frames your activity produces. The count starts afresh with each new image.


This boolean option can be used for benchmarking. If set to 'yes', lux will generate frames as usual, but they won't be sent to the GPU for display. So if you do performance measurements with this option active, you measure only the CPU load used for rendering, but not the GPU load used for displaying the view.

This, at least, is the theory. Currently, I get funny readings using this option and frame creation with 'suppress_display' on takes longer...


This option sets the projection used for the view which lux shows on-screen. This is not to be confused with the option 'projection', which determines the projection lux assumes for source images.

target_projection can be set to any of the single-image projections lux can handle: "rectilinear", "spherical", "cylindric", "stereographic", "fisheye" or "mosaic". Note that "mosaic" is special insofar as it will only work for "mosaic" source projection.

Because lux normally renders the same content that the on-screen view displays, this option also determines the projection of snapshots you take, unless you're taking 'source-like snapshots', where the metrics depend on the source image (or a PTO file's p-line). So if you want to produce a spherical panorama, you must use "spherical" as target_projection.

Again, using a specific target_projection does not automatically produce 'correct' output - if, for example, your source projection is set wrongly or the horizontal field of view is wrong, the output can't be correct (except in a few corner cases where two wrongs make one right, if you get my drift ;)

The target projection and field of view will be written into the metadata of images produced by lux - or, for openEXR output, which can't hold metadata, it will be written to a 'sidecar' lux ini file. If such output is later viewed with lux, the projection which was the target projection when the image was made becomes the source projection used for viewing it. If you start out with correctly source-projected input, lux output will also be correct.


Setting this boolean option to 'true' applies a very simple tonemapping operator to the view, which compresses the dynamic range so that brighter content is forced into the dynamic range of the view (sRGB). Because this uses a simple compression curve on the intensity values, it will reduce contrast. This option can only handle a certain amount of 'extra' brightness (2 Ev), intensity values beyond that will still be capped. You can also press F9 to toggle this option.

The curve used is: out = 318 * ( in / ( in + 255 ) )
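The compression behaviour of this curve is easy to verify numerically. A small sketch (an illustration of the quoted formula, not lux code), assuming 8-bit-style intensity values where the input may exceed 255 by up to two Ev, i.e. a factor of four:

```python
def tonemap(x):
    """The simple compression curve quoted above: out = 318 * x / (x + 255).

    It maps 0 to 0 and approaches 318 asymptotically; two Ev above
    'full white' (x = 4 * 255 = 1020) comes out just below 255.
    """
    return 318.0 * x / (x + 255.0)

print(round(tonemap(255)))   # 'full white' is compressed to 159
print(round(tonemap(1020)))  # 2 Ev over: ~254, just inside sRGB range
```

This also illustrates the contrast reduction the entry mentions: values around 'full white' are pushed down to roughly 60% of their original brightness to make room for the brighter content.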

uncropped_width, uncropped_height, uncropped_hfov, uncropped_vfov

These options are used internally to define metrics of source images with active cropping, so they need '--cropping_active=yes' to be honoured, and also values for crop_x0, crop_x1, crop_y0 and crop_y1. Such source images occur when lux stitches images from PTO files where the output is cropped to an area smaller than the 'nominal' field of view. Some stitchers use 'cropped TIFF' output for the purpose, but I prefer a set of metadata which describe the cropping completely and do not rely on capabilities of the image format, so I introduced 'lux metadata' which include all metrics needed to completely describe cropped images. If lux finds such metadata, the image can be displayed adequately.

But there is nothing stopping you from passing the cropping information on the command line, as long as you provide the complete set. Note that you can safely omit the 'y', 'height' and 'vfov' values, which can be calculated automatically from the given 'x', 'width' and 'hfov' values.

Note that the 'uncropped_width' you pass corresponds to the 'nominal' field of view, given as 'uncropped_hfov', and the x0 and x1 values tell lux where in the range of the uncropped width the source image should be placed.
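For projections where the horizontal angle is proportional to the horizontal pixel coordinate (spherical, cylindric), the relation between the cropped and uncropped metrics can be sketched like this. Note this illustrates the arithmetic only, it is not lux code, and the linear relation does not hold for rectilinear images:

```python
def cropped_hfov(uncropped_width, uncropped_hfov, crop_x0, crop_x1):
    """hfov covered by the cropped area, assuming the horizontal angle
    is linear in x (true for spherical/cylindric projections)."""
    return uncropped_hfov * (crop_x1 - crop_x0) / uncropped_width

# a 360-degree panorama 8000 pixels wide, cropped to x in [2000, 6000),
# covers 180 degrees:
print(cropped_hfov(8000, 360.0, 2000, 6000))  # 180.0
```

The x0/x1 values thus fix both the angular extent of the crop and its placement within the 'nominal' field of view.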


This is a boolean flag, true by default. Lux will always honour source image cropping, and it will honour masks from a PTO if use_pto_masks is not switched off.

With use_pto_masks active, lux will honour all masks in PTO files. In a nutshell: if you render a synoptic image from a PTO with masks, the outcome should be as expected, but the 'live view' while the viewer is not at rest is not always right for panoramas with stacks, see below for details.

Any content which is masked out with exclude masks should not have any effect on the result - the effect is just like opening the image in an image editor and painting the mask's area with full transparency, and in fact you may opt for this way of masking out unwanted content instead. I added exclude mask support because it's a handy PTO feature and I use it frequently with hugin - it's just more convenient to draw a mask in the stitcher with a few lines, than to modify the source image. And it's also non-destructive and can be easily undone.

Content inside include masks will receive high priority, resulting in 'pushing it to the foreground'. The implementation of include masks is very recent and not yet well tested, and has some issues which are not resolved optimally: include masks are implemented by prioritizing content, rather than by alpha channel manipulation. The prioritization is an either-or effect, so the edges of the prioritized region are hard (as opposed to exclude masks, which are slightly feathered), and there are staircase artifacts on the order of magnitude of the facet image's resolution. Both these shortcomings will be visible in the 'live' view if the view's magnification is sufficiently large - if the viewer is at rest and idle-time processing results in multi-level blending, these artifacts will be less of an issue. So why not 'translate' include masks to the alpha channels of the other facets? Because facet-to-facet transformation can be very slow if the target facet has active lens correction. To handle this properly, the inverse of the lens correction polynomial has to be calculated, which is slow - and this would have to be done for every pixel, because the straight lines of the mask polygon in one image may well come out as curves when projected to another one. Just projecting the polygon's corners is not a generally viable option. I do that for stack-aware masks, assuming that the stacked images will be 'roughly the same', but for the general case, with facets varying wildly in orientation, fov, and projection, this assumption is not reasonable. A possible workaround would be to add corner points to the polygon along its edges, to make the 'bent edges' problem less severe.
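The workaround mentioned last - adding extra points along the polygon's edges so that the projected polygon follows the 'bent' edges more closely - can be sketched like this (an illustration only, not lux code):

```python
def densify(polygon, points_per_edge):
    """Insert 'points_per_edge' evenly spaced extra points on each edge
    of a closed 2D polygon. Projecting the denser polygon to another
    image approximates curved edges better than projecting only the
    original corners."""
    result = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        result.append((x0, y0))
        for k in range(1, points_per_edge + 1):
            t = k / (points_per_edge + 1)
            result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return result

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(len(densify(square, 1)))  # 8: one extra point per edge
```

The trade-off is obvious: more points mean a better approximation of the curved edges, but also more per-point transformation work.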

Another shortcoming is due to the fact that the 'live view' will only show stack parents. If stack parents have exclude masks, the live view will show the masked stack parent image, even if other images in the stack would 'fill in' the excluded area, whereas the idle-time view will fuse the stack, with such areas which are masked out only in the stack parent suddenly 'appearing' when the 'proper' blended view is ready after a while. A similar effect occurs with exclude masks on facets which are not stack parents: they won't have an effect in the live view, because only stack parents are processed in the live view. When the 'proper' blended view is rendered, their masking will become apparent. Limiting the live view to stack parents is for performance: I think it's better to maintain a good frame rate during 'navigation' than to approximate the final outcome closely. After all, the live view of a PTO file isn't meant for extensive viewing - it's an analytic tool which may, under certain constraints, be 'good enough', but really the PTO should be used to render a synoptic image, which will make for a more fluid viewing experience.

The fifth type of masks is per-lens exclude masks. These masks are applied to all images with the same lens. PTO format does not store an explicit lens number, so lux looks at hfov, projection and lens parameters a, b, c, d and e, and if they are all the same, the lens is taken to be the same. Here, copying the mask to all images of the group is always correct: the mask is used to remove lens-specific parts, which always occur in the same part of the image and are unaffected by orientation, so there is no need for coordinate transformation.
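The grouping heuristic boils down to: two images share a lens iff their hfov, projection and a, b, c, d, e parameters all agree. A hypothetical Python sketch (field names are illustrative, not lux's actual data structures):

```python
from collections import defaultdict

def group_by_lens(images):
    """Group image records by a 'lens key': hfov, projection and the
    lens correction parameters a, b, c, d, e. Images with identical
    keys are taken to use the same lens, so a per-lens mask applies
    to all of them."""
    groups = defaultdict(list)
    for img in images:
        key = (img['hfov'], img['projection'],
               img['a'], img['b'], img['c'], img['d'], img['e'])
        groups[key].append(img['name'])
    return dict(groups)

imgs = [
    {'name': 'a.jpg', 'hfov': 90.0, 'projection': 'rectilinear',
     'a': 0.0, 'b': -0.01, 'c': 0.0, 'd': 0.0, 'e': 0.0},
    {'name': 'b.jpg', 'hfov': 90.0, 'projection': 'rectilinear',
     'a': 0.0, 'b': -0.01, 'c': 0.0, 'd': 0.0, 'e': 0.0},
]
print(group_by_lens(imgs))  # both images end up in one group
```

Note that this is an exact comparison: if an optimizer has produced slightly different lens parameters per image, the images will land in separate groups.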

There is another type of mask which lux honours unconditionally: source image cropping masks (rectangular or elliptic) which are used to mask out unwanted edges of source images, typically where the lens's field of view is so small that the image proper doesn't cover the entire sensor. This is typical for 'circular fisheyes', where the center of the image has the roughly circular fisheye image, surrounded by more or less dark area which corresponds to the inside of the lens body. If such bits are not masked out, they will bleed into the result. Such source image cropping is notated in PTO as a field in the i-line starting with a capital 'S', like 'S3551,6814,89,3352'. Source image cropping is also done with alpha channel manipulation, just like exclude masks.
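Parsing such an S field might look like the sketch below. I'm assuming the four comma-separated numbers are the left, right, top and bottom pixel coordinates of the crop window, matching the usual PTO convention; this is an illustration, not lux's parser:

```python
def parse_crop(field):
    """Parse a PTO i-line cropping field like 'S3551,6814,89,3352'.

    Assumed layout (PTO convention): left, right, top, bottom pixel
    coordinates of the crop window.
    """
    assert field.startswith('S')
    left, right, top, bottom = (int(v) for v in field[1:].split(','))
    return {'left': left, 'right': right, 'top': top, 'bottom': bottom}

crop = parse_crop('S3551,6814,89,3352')
# the example crop is square - plausible for a circular fisheye:
print(crop['right'] - crop['left'], crop['bottom'] - crop['top'])  # 3263 3263
```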

One last thing about stacks in lux: lux will always honour stack assignments in the PTO, no matter what the actual geometry is. You can have stacks with images which don't fit very well, and they will still be fused for the final result. This is done differently in hugin, where images which "don't match well enough" are 'un-stacked', whereas images which are oriented similarly may end up in a stack without having been assigned to it.


This boolean option is set to true by default. This setting seems to work best in most environments, resulting in smooth animations without dropped frames or stutter. If you set this option to false, you should also set a frame rate limit (see frame_rate_limit), otherwise lux will simply try and display as many frames per second as it can produce. This option is directly passed through to SFML, which knows these two modes of setting the frame rate; see the SFML documentation for details.


If you pass --version=yes, lux will echo its version number to the console and terminate.


This option sets the vertical offset angle, in degrees. You can also use the short form, -y. This value is important for panoramas which don't have the horizon in the vertical center position. The automatism in lux assumes that's where the horizon is and sets this value to half what's left after subtracting the vertical field of view from 360 degrees. But if the horizon is elsewhere, you must use a different value - or manually correct the horizon position using the H/Shift+H key.

This only affects images where a misplaced horizon is possible: full sphericals, for example, have no 'spare angle' left, unless you 'cheat' by passing a smaller vfov.

This option is similar to GPano's CroppedAreaTopPixels, but uses degrees, and the angle is measured not from the zenith but from the point 'just behind you', which I sometimes refer to as the 'back pole'. This choice is so that the meaning of the angle is the same in the horizontal (see horizontal_offset) and vertical direction.

This option is good for showing 'little planets'. If you have a full spherical, try passing -ps -h360 -v90 -y90, then go to the nadir (PgDown) and zoom out. Using degrees here makes the parameter usable for all projections, whereas using a pixel value is only possible for projections where a 'full extent' exists. Consider cylindrical panoramas: their 'full vertical extent' would be infinite. Using degrees avoids this problem.


This option sets the vertical field of view for the source image. This option is rarely used: normally, the vertical field of view is determined by the horizontal field of view, the offset, and the projection. But at times you want to 'cheat' and make lux use a value different to the one which would be appropriate to the given hfov etc.

An example of how this can be made useful is in the option just above (see vertical_offset), where it's used to display a 'little planet'.

window_width, window_height

Per default, lux works in fullscreen mode, but you can switch it to display in a window by passing --fullscreen=no or -W for short - or pressing F11 to toggle the mode. Without further specifications, the window will be 80% of the full screen. If you require a different window size and want to fix that size via command line parameters, use 'window_width' and 'window_height'. Note that this may not do what you ask for if your windowing system interferes: on my system, if I specify window sizes larger than the screen, the system gives me a smaller window, even though I can 'manually' create a window which is larger than the screen area. This is annoying, but so far I haven't found a way around it.

Why does this matter? Because of snapshots. Snapshots done with E, P or U create output of the same shape and with the same content as the currently displayed window (or the full screen, in fullscreen mode) - magnified by the factor fixed by snapshot_magnification. Sometimes you want snapshots of a precisely defined size, and if the snapshots need to be larger than the current screen size, you have to combine a smaller window size and a snapshot magnification factor to arrive at the desired output size. This would be easier if you could simply specify a larger window size and leave the snapshot magnification factor at 1.0 - but since you can use a real (non-integer) factor for the snapshot magnification, this is only an inconvenience. Let's suppose you require square 3000x3000 output and you can only have windows up to 800 in height. You might use these options:

lux -W --window_width=750 --window_height=750 --snapshot_magnification=4 ...
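The arithmetic behind this choice of values is simply window size times magnification. A helper to pick the two values for a desired output size, given a maximal window extent, might look like this (an illustrative sketch restricted to integer magnifications, though lux also accepts real factors):

```python
import math

def window_for_snapshot(target, max_window):
    """Pick an integer magnification and a window size so that
    window * magnification == target, with window <= max_window."""
    magnification = math.ceil(target / max_window)
    window, rest = divmod(target, magnification)
    if rest:
        raise ValueError("target not evenly divisible; use a real "
                         "snapshot_magnification factor instead")
    return window, magnification

# 3000x3000 output with windows limited to 800 pixels:
print(window_for_snapshot(3000, 800))  # (750, 4)
```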


This option tells lux how many worker threads to employ when rendering 'ordinary' frames for on-screen display. The default value - zero - tells lux to use the number of threads which is the rendering back-end's standard; with zimt and vspline this is twice the number of physical cores. This seems odd: why specify more threads than there are cores? Because more threads get more CPU time slots. Typically the rendering gets a bit faster with this 'elbowing' method. To use a different number of worker threads, you can pass any number here, but there is a limit: the rendering back-end's thread pool population. Passing larger values won't have an effect.

This option comes in handy when running benchmarks and is a good candidate for being combined with suppress_display. This command, for example, will give you the average frame rendering time using a single thread and no visible output for a thousand-frame-pan over image.jpg:

lux -n --worker_threads=1 -z1000 -A.05 -ps -h360 image.jpg


displayed 1000 frames, which took 30.95034ms on average to create

You'll only see the lux splash screen while the test runs - the frames are rendered, but not sent to the GPU. This reduces system load and gets you the 'pure' rendering time: displaying the frames produces some CPU load of its own (how much varies from system to system), and if you don't leave at least one core 'unemployed', that load will 'spoil' your rendering time test. With only one worker thread the outcome is the same either way, because the remaining system load is routed to the free CPU cores.

Try different numbers of worker threads and see how well rendering utilizes the CPU cores! Since the worker threads use only a minimal amount of synchronization, there is next to no multithreading overhead.