lux - a free open source image and panorama viewer

Impatient? Here's the fast lane:

Mac users, please note that there is a separate chapter Using lux on a Mac further down, describing how to install and launch lux on a Mac! Linux users, please note that I recommend using lux as an AppImage. Just download it, make it executable and run it like any other binary, no need for a distro-specific package.
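
For the command line, making the AppImage executable and running it boils down to this - the file name is just a stand-in for whatever you downloaded:

chmod +x lux.AppImage   # placeholder file name
./lux.AppImage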

lux is free, open source software, licensed under the GPL v.3 - please see the file 'LICENSE' for the text of this license. Copyright is by me, Kay F. Jahnke. There are sources from myself and other authors in the lux source tree which are licensed differently; please refer to the beginning of the files or to the license files in the folder containing these sources. If you want to find out what other libraries lux relies on, have a look at THIRD-PARTY-LICENSES.

lux is multi-platform and runs on Linux, Windows, macOS and FreeBSD.

lux is an image viewer for 'normal' images and the most common types of panoramic images, typically showing a 'rectilinear' view of the image data, which looks as if this view had been taken with an 'ordinary' lens. The view can be zoomed, panned, scrolled, rotated and modified in several ways. lux displays images, it does not modify them - but it can produce high-quality 'snapshots' of the view it shows. lux can also produce synoptic views of several images and do stitching, HDR blending, exposure fusion, focus stacking and deghosting, usually from 'PTO' files, processing a subset of the panotools standard used by panorama stitching software like hugin.

lux now uses OpenImageIO for image import, so it can open a variety of image file formats, including JPG, TIFF, PNG, and EXR. With the introduction of OpenImageIO, lux can now also open files you would not necessarily expect, like camera RAW files in various formats and single images from video files. lux can also open PTO files - a format describing synoptic views for panoramas, used by software like hugin.

When displaying PTO files, lux will show a synoptic view of the set of images in the PTO file, which can be manipulated like a single image view. lux also has its own file format, the 'lux ini file', or 'lux file' for short, using the .lux extension. It's a simple list of key-value pairs, like in 'regular' ini files, and it's used to bundle sets of parameters.

You'll get the hang of using lux quickly - if you've used other image viewers before, the basic UI should come naturally. Beyond that, there is a lot more to discover: viewing panoramic images goes beyond merely looking at a rectangular view of some larger rectangle - with a 'full spherical' you can 'look' all around you, and lux is a tool to let you do just that. The user interface is inspired by 360cities' QTVR mode and may feel strange to new users, but it's well-suited for panorama display. Because lux presents both 'ordinary' and panoramic images with the same user interface, it's ideal to show mixed selections: with 'ordinary' image viewers you have to tell the viewer to use another program when a panorama comes up, and vice versa.

lux' stitching and exposure blending capabilities are steadily improving, and lux now employs a modified version of the Burt and Adelson image splining algorithm for panorama blending, exposure fusion etc., which produces appealing results. Such rendering takes more time than the 'live' view and will only be done for on-screen display when the viewer is at rest.

lux is now distributed in binary form from the project's download page at bitbucket - please visit https://bitbucket.org/kfj/pv/downloads/ to find ready-made AppImages for Linux, .dmg files for macOS, and installers and portable .zip files for Windows. You should find 'stable' builds, and possibly current development snapshots which I release from time to time to enable users to test new features before I carve out a new release. The AppImage with 'master' in the file name is usually the most recent Linux build available. On the download page, you'll also find up-to-date documentation in HTML format for download.

If you prefer to build from source, please look at the build instructions further down. lux is now using CMake and the build process is quite straightforward. If you have a ready-made binary, skip to First Launch.

If you find bugs, or if you'd like to get in touch to discuss features or make requests, please use the issue tracker on lux' bitbucket page. There are also several lux-related threads on the hugin-ptx mailing list.

lux is very comprehensive software, with many features you may not expect in an image viewer, and many of these features need your interaction to be used or useful. Please do make an effort to read the documentation, to get at least an idea of what you can and cannot do, if you intend to do anything beyond just viewing a few images with the automatic default settings. I have recently rewritten lux' GUI, which is now a reasonably traditional GUI made with Dear ImGui, sporting a set of 'panels' which give you control over most lux options. This should reduce the need to pass command line arguments, but of course you can still do so if you like - this route of parameterization has not changed. I have not yet written documentation for the new GUI in this README, but all GUI elements have tool-tips - just hover the mouse over the little question marks next to the GUI entities. Getting to know lux and its - not always obvious - ways will take you some time, but as a 'reward' you may find that lux is the only viewer you'll want to use. This is my design goal: one tool, one uniform UI for all images, on all systems.

The GPLv3 license applies to the program as a whole and to lux sources wherever it says so in the source code. I have chosen to put several sources used by lux into this repository to make it easier for users to get up and running if they wish to build from source, and I have also included the default font, together with its attached license file (as required by its license). Please take note of the license information given in each of these files or the associated readme files. Note that the copy of my vspline library which is distributed with the lux repo is licensed more liberally. This version of vspline tends to be ahead of the stand-alone vspline distribution, because I tend to try out new vspline features in lux before integrating them into stand-alone vspline.

Note that until March, 2021, lux was called 'pv', which was not really a 'proper' name but short for 'panorama viewer'. I changed the name to avoid the name collision with 'pv, the pipe viewer', and also to give it a proper name, which - I hope - does it justice. lux is latin, and it means 'light' - the name was chosen because lux translates data into visible light. You may still find references to 'pv' in the documentation and the source code - the name change will take a while to 'trickle down'. The git repository still goes by the old name, and the old makefiles still produce binaries named pv or pv_XXX where XXX is some suffix. The sources also are prefixed with pv.


Building lux with CMake

Before the CMake build process was established, I used several branches for different build types. All of these branches are now obsolete, and all binary variants can be built from the master branch by configuring the build with CMake. As the build script, CMakeLists.txt, became more sophisticated, I changed the source code as well, to arrive in a situation where the several options of building lux can now be combined quite freely: you can choose to use one of three SIMD back-ends (or none, if you prefer), on i86 machines you can use a variety of 'flavours' for different ISAs like SSE4.2 or AVX2, and you can build tailor-made CPU-specific binaries with 'native' builds. If there are OS-specific requirements, the CMake script should take care of them as well, so the CMake build is multi-platform - only cross-compiling is not supported. If you want to build for, e.g., a mac, build on a mac.

All CMake options should work for all variants - I made an attempt at setting up the code so that everything is 'orthogonal': if, for example, you want the rendering code to be single-threaded, you can do that by setting USE_MULTITHREADING OFF, irrespective of other build settings.
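
For example, a single-threaded rendering build could be configured like this, with all other options left at their defaults:

cmake -DUSE_MULTITHREADING=OFF ..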

This section outlines the general procedure, the detailed sections further down explain the specific steps for ubuntu/debian, windows and macOS.

First, install the dependencies. You'll need:

CMake
vigraimpex
SFML
libexiv2
clang++
optionally: Vc, highway or std::simd (build dependency only)
optionally: OpenImageIO

How this is done varies from system to system - please look in the relevant section (further down) to see what is appropriate for yours. Note that using highway or Vc is strongly recommended for good performance. Use of highway is preconfigured as default, see further below for changing the SIMD library.

libexiv2 is moving to C++17 with v0.28 and has changed some of its API - notably, toLong has been replaced by toInt64. In some of my builds I have moved to 0.28, in some I haven't - for the latter ones, pv_metadata.h had to be patched: all occurrences of toInt64 have to be replaced by toLong - a simple global search-and-replace is sufficient.
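
If you need to apply that patch, a global search-and-replace from the command line should do - this assumes GNU sed; back the file up first:

sed -i 's/toInt64/toLong/g' pv_metadata.h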

highway should be present in at least version 1.0.5, which provides the crucial atan2 function in vectorized form. If necessary, it should be built from source. Note that - as of this writing - highway is still under intense development, so if you're re-building lux, it may be worth your while to get a recent pull, build and install from their repo, rather than relying on what your package manager has on offer!

OpenImageIO is a new dependency, which is now used by default in the master branch. If you want to avoid using OpenImageIO, you can fall back to using vigraimpex only by setting the cmake option USE_OIIO OFF.
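
So a build without OpenImageIO would be configured like this:

cmake -DUSE_OIIO=OFF ..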

Next, clone the pv repository and change to its root:

git clone https://bitbucket.org/kfj/pv lux
cd lux

Kornel Benko kindly provided an initial CMakeLists.txt file to build lux with CMake, which has since grown to allow for many different build variants. The default settings should create a binary suitable for your system, but to get the best performance, you may need to tweak the CMake settings, especially for builds on non-i86 platforms. We enforce a separate build directory. Build a 'standard' lux binary like this, starting out in the root directory:

mkdir build
cd build
cmake ..
make

You should now have a viable 'lux' binary (lux.exe on windows) in the build directory. You can add options to the final invocation of 'make' - the most common would be to increase the number of threads working on the compilation. I use two threads per physical core, like:

make -j8

Building with ninja instead of make also works - on some platforms this is the default, and 'make' may not even be installed. Just say 'ninja' instead and ninja will do its build magic.

Note, though, that the rendering code is complex and requires a lot of memory to compile, so the build may fail if you use too many threads because the system runs out of memory. Also note that some make tools automatically use several threads, which may be too much for the system, so you may have to tell them to use fewer threads than they would by default.

Building out-of-source has the advantage that you can have several builds in their separate build directories, and if the source code changes, all you have to do is to go to each build directory and issue the make command again to get an updated build. Sometimes changes in the CMakeLists.txt, the file controlling CMake's operation, fail to have the desired effect. If that happens you can try and delete CMakeCache.txt in the build directory and run cmake afresh. Note that CMake will 'memorize' any options you pass to it. If you invoke it again, only 'explicit' changes will have an effect, while other options from previous invocations which you don't override will persist. I mention this because it's at times not anticipated.
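
So if a change doesn't seem to 'take', or you want to be rid of memorized options, clear the cache in the build directory and reconfigure:

rm CMakeCache.txt
cmake ..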

You can use 'make install', which will install the binary and the GUI font it relies on to a location which is in the system's execution path, making lux available for simple calls without a path, like other 'regular' programs. Note that, on Linux or macOS, you have to use 'sudo' or gain superuser privileges in another way to use 'make install': the directory where the binary needs to go is usually not writable for 'ordinary' users. Installing lux will also put the default font in the right place.

Binary variants

CMake lux builds can be configured to use one or several specific ISAs for the rendering code (currently only on intel/AMD CPUs), to use a SIMD library for SIMD code, and to use vspline's 'VECTORIZE' mode, which, when switched off, will produce a scalar version without any attempts at vectorization. The latter two are options which are ON by default, so you have to actively disable them for the build if you want to switch them off. Let's start out with the ISA(s) which will be put into the binary.

lux uses what I call 'flavours' - these are specific parameter constellations (like, compiler flags and preprocessor #defines) which are used to create object files from the rendering code. At least one of these flavours is needed, and if no flavours are set ON in the CMake build, the build will force the use of the 'plain' flavour, which is compiled without any flags specifying the use of a specific ISA - this flavour should run on a wide range of CPUs, but it will not be very fast. As of this writing, on non-intel/AMD CPUs, this is the only usable flavour - more flavours may be activated, but their use will be suppressed unless the target CPU is indeed an intel/AMD one. For intel/AMD CPUs, lux offers the following flavours (the ON/OFF after the flavour gives its default setting):

  • FLV_PLAIN ON
  • FLV_SSSE3 OFF
  • FLV_SSE42 ON
  • FLV_AVX OFF
  • FLV_AVX2 ON
  • FLV_AVX512f ON

so, if you want the AVX flavour added to your lux binary, you'd configure the build with

cmake -DFLV_AVX=ON ..

You may opt to have just one single flavour in your binary - e.g. if you want a binary which is to be used only on machines with the corresponding CPU. So, for example, if you build for a CPU which is only capable of AVX instructions, you could switch all flavours apart from FLV_AVX off.
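
Such an AVX-only build might be configured like this, switching the default flavours off explicitly:

cmake -DFLV_PLAIN=OFF -DFLV_SSE42=OFF -DFLV_AVX2=OFF -DFLV_AVX512f=OFF -DFLV_AVX=ON ..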

Use of an external SIMD library is activated by setting one of the following three CMake options 'ON' (Vc is ON by default):

  • USE_VC_LIBRARY ON
  • USE_HWY_LIBRARY OFF
  • USE_STDSIMD OFF

This will only have an effect if USE_VSPLINE_VECTORIZE is also ON, which is the default. Switching USE_VSPLINE_VECTORIZE OFF will force generation of scalar code. Switching all three of the flags above OFF but leaving USE_VSPLINE_VECTORIZE ON will use vspline's 'goading' mechanism to produce SIMD code, which is less effective than the explicit SIMD code available via the SIMD libraries, but usually better than scalar code. It's recommended for builds where you don't have access to any of the SIMD libraries or where the SIMD libraries you can use don't support your target CPU. Note that you can only use one SIMD library. The CMake script will enforce this, so if you set USE_HWY_LIBRARY ON, the other SIMD libraries will be switched OFF.
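
So, to pick highway as the SIMD library, this single option is enough - the CMake script switches the other SIMD libraries OFF for you:

cmake -DUSE_HWY_LIBRARY=ON ..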

The first two SIMD library options will use Vc or highway, respectively. Both variants have seen a good deal of use and perform well, but highway supports a wider range of CPUs and is actively developed. Vc is good for i86 CPUs, and it explicitly supports AVX, the predecessor of AVX2, so if you're running a CPU with that ISA, Vc is your first choice. Using std::simd is more involved: you need a std::simd implementation, which is not yet common, and you need to compile using the C++17 standard. The CMake build will set the appropriate compiler flags. The std::simd build is more of a proof of concept and not as well-honed as the other builds. It currently does not support CPU detection at runtime. If you have Vc or highway installed, you can use their CPU detection by adding -DUSE_VC=ON or -DUSE_HWY=ON to the compiler flags for pv_no_rendering.cc only and linking with Vc or highway.

If you choose to use neither Vc nor highway, the resulting binary will not be able to use CPU detection (because lux relies on the SIMD libraries' cpuid implementation). Therefore it will classify all flavours apart from the 'plain' flavour as non-viable and not dispatch to any of them automatically. You may still force the dispatch, though, by overriding automatic dispatch (using --isa=...), and the binary code using ISA-specific compiler flags will usually perform better than the fallback, because the compiler can still emit instructions for the 'better' ISAs and it can also autovectorize, which is quite effective as long as the USE_VSPLINE_VECTORIZE option is also ON.
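
Such a forced dispatch might look like this - the image name is a placeholder, and the avx2 flavour must have been compiled into the binary:

lux --isa=avx2 some_image.jpg   # placeholder image name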

USE_MULTITHREADING activates use of multithreading in the rendering code. This CMake flag is ON by default. lux can't stop multithreading altogether, but it can stop the rendering code from using multithreading, forcing it to use only a single thread for the purpose. This does slow things down a lot, and usually you'll only want to switch multithreading off for performance measurements or debugging purposes.

Another way of influencing the binary is by passing additional arguments to the compiler. Simply assign a string containing the extra options to the CMake variable EXTRA_COMPILER_ARGS. These arguments will be added to all compiler invocations and therefore they affect rendering and non-rendering code alike. If you want to use compiler args exclusively for the rendering code, assign to EXTRA_RENDERING_COMPILER_ARGS instead. If you want to add compiler flags for individual flavours, you'll have to edit the CMake file CMakeLists.txt.
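
For example, to pass an extra flag to the rendering code only - the specific flag here is merely illustrative:

cmake -DEXTRA_RENDERING_COMPILER_ARGS="-funroll-loops" ..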

To control memory consumption and to search for memory leaks, there are two options: lux' own mechanism, and the use of clang's 'leak sanitizer', which I now prefer, because it does not require active coding effort to keep track of allocations and deallocations. For lux' own mechanism, set MEMLOG=ON; to activate the leak sanitizer, set LEAK_SANITIZER=ON. The latter option also sets the -g flag, to preserve debugging symbols in the binary. Both options are OFF by default.
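
So a debugging build using the leak sanitizer would be configured like this:

cmake -DLEAK_SANITIZER=ON ..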

There's one more build variant I'd like to describe, which is now also incorporated in the main branch: 'native' builds. You use a flavour named 'FLV_NATIVE', which will result in a build with only the 'plain' flavour, and you add flags specific to the desired target via the CMake variables EXTRA_COMPILER_ARGS or EXTRA_RENDERING_COMPILER_ARGS. The resulting binary will be specific to the chosen target, hence the name 'native'. A typical 'native' build for an i86 machine, using all features of the machine the binary is built on, would be configured like this:

cmake -DFLV_NATIVE=ON -DEXTRA_COMPILER_ARGS=-march=native ..

I used to automatically insert the '-march=native' compiler flag for i86 builds, but I think it's clearer to require that any flags on top of the 'plain' base are explicitly stated. Note how passing EXTRA_COMPILER_ARGS affects the compilation of rendering and non-rendering code alike - this can't be done for builds with several flavours which are expected to run on a wide range of CPUs: for that scenario, the non-rendering code is compiled with no ISA-specific flags to make it usable on every CPU. You can think of native builds as a platform to which you add any specific compiler flags you want to have active during compilation. Without these additional flags you'll get a binary which will run on a wide range of machines, but it will be quite slow.

TODO: in a recent build, the binary crashed when I used -DEXTRA_COMPILER_ARGS=-march=native. I could avoid that by only passing -DEXTRA_RENDERING_COMPILER_ARGS=-march=native, which only affects the rendering code. There seems to be a problem with compiling the remaining code with -march=native.

'native' builds are good for faster turnaround times and produce a small binary. On non-i86 CPUs, the 'plain' flavour is the only one available anyway, so using FLV_NATIVE is redundant. To build a binary for an Apple M1, try and configure the build like this:

cmake -DUSE_HWY_LIBRARY=ON -DEXTRA_COMPILER_ARGS=-mcpu=apple-m1 ..

Because it's not an i86 build, the 'plain' flavour is the only available option, so you needn't specify FLV_PLAIN or FLV_NATIVE, and since you're building for one specific ISA you can add the ISA specification via EXTRA_COMPILER_ARGS, because it can be used for the non-rendering code as well.

A note about building for ARM processors:

Vc does - as of this writing - not support ARM CPUs, whereas highway does. It's possible to build a viable lux binary for ARM machines without using any of the SIMD libraries, relying on 'goading' (so, USE_VSPLINE_VECTORIZE should be left ON) - this was done on two Raspberry Pi machines and on an Apple M1, where the latter performed quite well - but using highway for ARM CPUs should boost performance noticeably. I have initial reports about a lux build on an M1 processor using highway which turned out around 25% faster than the 'goading' variant, using the configuration proposed above. This build is available on the download page.

Running lux on Apple's M1 processor, there was an issue with starting up in full-screen mode, which is still under investigation. The workaround for this issue is to simply start in a window (use -W) - switching to full-screen mode later on worked all right.

If you have managed to build a viable lux binary in a novel way, please share your findings!


Building lux on a debian-based system

get the dependencies (here we use Vc; you might want to use highway instead, or rely on std::simd, which comes with C++17 - that's now the standard lux uses):

sudo apt-get install vc-dev libvigraimpex-dev libsfml-dev libexiv2-dev clang libopenimageio-dev

clone the pv repository and change to its root:

git clone https://bitbucket.org/kfj/pv lux
cd lux

build with cmake to make a binary 'lux' in the 'build' directory and install it

mkdir build
cd build
cmake ..
make -j8
sudo make install

Of course, all the libraries are available as source as well; recent versions from the respective repos should work with lux. Vc's 1.3 branch, which older Linux versions may still distribute, may or may not work - I recommend using Vc 1.4, which should now be provided by most distros. Note that you have to install the 'vc-dev' package, not 'libvc-dev', which is a different library altogether. On my machine (as of this writing running ubuntu 22.10), there is also a highway package, called 'libhwy-dev'. Both packages work out of the box, and performance is roughly similar. When building with highway, I recommend building highway from source, though, because it's still under active development and steadily evolving. Vc is now in maintenance mode and does not support newer ISAs. If you want to target AVX512 or ARM NEON, I recommend using highway instead, but on i86 up to AVX2, Vc is still a good option.

To build Vc from source, do this:

git clone https://github.com/VcDevel/Vc.git
cd Vc
git checkout 1.4
mkdir build
cd build
cmake -DCMAKE_CXX_COMPILER=clang++ -DBUILD_TESTING=0 ..
make
sudo make install
cd ../..

To build highway from source, do this:

git clone https://github.com/google/highway
cd highway
mkdir build
cd build
cmake -DCMAKE_CXX_COMPILER=clang++ -DBUILD_TESTING=0 ..
make
sudo make install
cd ../..

You may also want to build OpenImageIO from source - not because the version from the package management doesn't work, but because it is very comprehensive and contains quite a lot of additional dependencies and code which lux does not use at all. I use a 'minimal build', like this:

git clone https://github.com/AcademySoftwareFoundation/OpenImageIO
cd OpenImageIO
mkdir build
cd build
cmake -DUSE_OPENCOLORIO=OFF -DUSE_PYTHON=OFF -DUSE_QT=OFF \
      -DUSE_OPENCV=OFF -DBUILD_TESTING=OFF -DUSE_OPENGL=OFF \
      -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang \
      -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j8
sudo make install

Note how I set the compiler and the install prefix as well. On my system, trying to build OIIO with g++ failed - hence the specification of the compiler. The default installation target is a local directory inside the OIIO package directory tree, hence the install prefix.


Building lux on Windows

Please note that the 'msys2' branch which was used to build for Windows is now obsolete! Windows builds should be done with CMake from the master branch.

In 2017, with help from Bernd Gaucke, lux was successfully compiled with 'Visual Studio Platform toolset V141', producing a binary which ran under Win10. Performance with this initial build was roughly on par with my Linux builds, though a direct comparison was not possible since the Win10 code ran on a different machine. Since then, the port to MSVC has not been used and its viability with the current status quo is unclear. I'd welcome an MSVC user to come forward to revive building with MSVC - with CMake set up, this should now be much easier than the initial 'pedestrian' approach.

In late 2019, I succeeded in porting lux to MSYS2/MinGW64 running on Microsoft Windows 10 pro v. 1903. The resulting binary code ran as expected on two systems, one with an old 2-core core-i5 with AVX units, and one with a recent hexacore with AVX2. The port to MSYS2/MinGW64 was painless; all necessary packages except Vc were available from the package management system. For Vc I cloned the repo, checked out the 1.4 branch and compiled with clang++, and I compiled lux' C++ code with clang++ as well. For W10 and W11, I now use the msys2 route only; these are the packages you need:

git
cmake
make
mingw-w64-clang
mingw-w64-vigra
mingw-w64-sfml
mingw-w64-exiv2
mingw-w64-x86_64-openimageio
mingw-w64-x86_64-openexr

To build from master with the default settings, you now need OpenImageIO. I usually have to give the entire prefix for the packages I want, like mingw-w64-x86_64-openimageio. To build with OIIO, I also had to manually install mingw-w64-x86_64-openexr.

Make sure you're only installing packages from the mingw64 repository, otherwise your build will almost certainly fail - so don't mix in e.g. ...-clang64-... packages.

If you want to use Vc, build and install it from source like this:

git clone https://github.com/VcDevel/Vc.git
cd Vc
git checkout 1.4
mkdir build
cd build
cmake -DCMAKE_CXX_COMPILER=clang++ -DBUILD_TESTING=0 ..
make
make install

Alternatively, you can obtain and use highway. This is now the preferred SIMD library for lux.

See the remarks on building highway and OpenImageIO from source in the 'Building lux on a debian-based system' section as well, they apply here, too, because msys2 works basically like a Linux clone.

Windows builds can now use the whole range of configurations described in the chapter 'Building lux with CMake'. Get lux sources from my bitbucket repo and build with cmake to obtain a binary 'lux.exe' in the build directory. Note that building for windows now uses the master branch, the separate msys2 branch is obsolete. cmake takes care of configuring the build for msys2, the source code is identical.

git clone https://bitbucket.org/kfj/pv lux
cd lux
mkdir build
cd build
cmake ..
make

You can also use 'make install', which will install lux into your msys2 environment, so that you can call it from the msys2 bash shell's command line - and also from Windows shells, provided you have your PATH environment variable set up so that the various DLLs used by lux can be found. Multithreading the build is usually a good idea (like, 'make -j4' for four threads), but building the rendering code is very memory-hungry, so don't overdo it.

When setting up the build with cmake on my msys2 install, I have to pass the library path:

mkdir build
cd build
cmake -DCMAKE_LIBRARY_PATH=/mingw64/lib ..
make -j4

Getting the paths right for lux may take some twiddling, the DLLs are spread out over several folders. On my system, I added these folders to my PATH environment variable:

C:\msys64\usr\local\bin C:\msys64\mingw64\bin C:\msys64\mingw64\lib C:\msys64\clang64\lib

With lux installed properly and the path set (on my system, lux is installed to C:\msys64\usr\local\bin), you can also select it as the default application to open various types of image files; I use it as the default image viewer, and also as the default application to open PTO and lux ini files. It's also nice to add lux.exe to the list of sendto targets (Windows+R, shell:sendto, add a new entry by navigating to C:\msys64\usr\local\bin\lux.exe), so that you can select a bunch of files you'd like to look at with lux in the explorer, and then pass the selection to lux via the context menu's 'send to' entry.

To build a windows 'bundle' - a portable version of lux which has all the necessary DLLs and will run as 'stickware', e.g. from a USB stick - you can use

./scripts/make_windows_bundle.sh my_bundle

This will create and populate a directory 'my_bundle' with the binary, the DLLs, source code and HTML documentation (the latter only if you have rst2html in your msys2 install). To share this bundle, it's best to zip the folder. This format is also available from lux' download page, look for a zip-file with a name containing 'portable'.

There is now a script to build a windows installer with inno setup, and I build the windows installers I distribute on the project's download page with it. The script is in scripts/lux_setup.iss. The resulting exe file will install lux on your system - and when lux was installed this way, you'll also be able to deinstall it. Note that the .iss script relies on the bundle folder.


Building lux on a Mac

If you only want to use lux on a mac and don't want to build it yourself, get a ready-made dmg from the Downloads section and move on to the chapter Using lux on a Mac! This chapter is about building lux from source.

The procedure for building lux on intel macs and ARM macs is similar, but for ARM macs you should use a 'native build', use highway for SIMD code, and pass the CPU explicitly. For intel macs, you can use the defaults. I haven't yet found out how to build a binary containing both intel and ARM code - my mac is too old, the compiler on it won't build such binaries.

In November 2022, I had gotten hold of an old-ish iMac (Haswell core i5, running Big Sur), so I could tweak the mac builds myself. I started out as outlined below:

  1. install xcode command line tools only
  2. install macports
  3. get additional building tools like git and cmake if/as needed
  4. get packages sfml, freeglut, exiv2, vigra, highway and OpenImageIO from macports
  5. configure the build with cmake and build, e.g. like this on an intel mac:
mkdir build
cd build
cmake .. # !!! see below
make -j8

Note: currently I have to set the C++ standard to 14 in the CMakeLists.txt to get a successful build: vigra seems to have problems with C++17, but C++11 does not work for some other sources. C++14 as a compromise works okay, but really I'd like to use C++17 now.

Note: currently, OpenImageIO from macports seems to have an issue with OpenColorIO and does not result in a viable build. To avoid the problem, build OpenImageIO from source with OpenColorIO disabled, using a cmake statement like

cmake -DUSE_OPENCOLORIO=OFF -DUSE_PYTHON=OFF -DUSE_QT=OFF \
    -DUSE_OPENCV=OFF -DBUILD_TESTING=OFF -DUSE_OPENGL=OFF \
    -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang \
    -DCMAKE_INSTALL_PREFIX=/opt/local ..

This is the same 'minimal' build I use on Linux to disable the components which lux does not need.

  6. optionally package the binary, e.g.
cd scripts
./make_bundle.sh

In step five, for an M1 mac, use

cmake -DUSE_HWY_LIBRARY=ON -DEXTRA_COMPILER_ARGS="-mcpu=apple-m1" ..

If you make a package (.dmg extension) you can install it: first doubleclick on the .dmg to mount it, then drag and drop the lux icon to your Programs folder. If you distribute such a .dmg, your recipients may have to perform additional actions to be allowed to install the binary if it's not signed by a registered apple developer. The script to build mac bundles was provided by Harry van der Wolf, and I have modified it (and the plist template) only slightly.

Lux requires permission to monitor the keyboard to function properly; if you don't grant this permission, 'chronic' user input (like keys held for some time for continuous effect) will not be recognized, crippling lux. When starting lux from the command line, you'll have to grant the same permission to your terminal app if it's not yet present. On first startup, you will also be asked for permission to access files on your disk, which is also necessary.

MacOS makes it hard for you to install the .dmg from the lux download page; it will warn you that the code may be dangerous, and without knowing the special key combinations, you won't even see the option to allow the installation regardless.

My efforts with the mac version aren't yet very 'professional', and there are issues I haven't found a good solution for, e.g. the fact that launching lux from the Finder via MIME type associations is possible, but lux won't receive the selected files (these are passed via an apple event, which lux is unaware of - it expects command line arguments instead). For the time being, please launch lux via its icon or from the command line and take it from there! If you are a mac developer, maybe you'd like to help? Please get in touch via lux' issue tracker!

The functionality on the mac is otherwise just as on other platforms, only the startup in fullscreen mode does not work, so lux will start in a window. I also can't obtain the 'F11' key from the keyboard, so switching to fullscreen should be done with the window's full-screen control.

Builds on 'Apple silicon' were successful both with the master and imgui branches, but so far we haven't succeeded in producing a macOS bundle for Apple Silicon builds from the ImGui branch on Sonoma. The x86_64 version runs on Apple Silicon as well, albeit maybe not quite as fluidly as native ARM code, because it has to go through the emulation layer and can't use the i86 SIMD instructions.

****** Here are the original build instructions:

In December 2020, Karl Krieger succeeded in building lux on an intel mac. He gave me these specs of the target machine:

iMac Retina 5K (2017) with 3.4GHz Quad-Core Intel Core i5, 40 GB Memory, Graphics Radeon Pro 570 4GB

macOS Catalina 10.15.7.

Xcode Version 12.2 (12B45b)

The build process was straightforward and the changes to the code base were minimal. By now, macports offers all dependencies, so all it took was installation of xcode and macports, installation of the dependencies and running the makefile, with a few extra flags. Vc may or may not have to be built from source - the mac build is only done sporadically and I don't have precise information right now.

I now recommend using CMake for the mac build, using the master branch. To build on a Mac, supposing xcode and macports are set up correctly, do this:

sudo port install vigra -python38
sudo port install sfml
sudo port install exiv2
git clone https://bitbucket.org/kfj/pv
cd pv
mkdir build
cd build
cmake ..
make

Here are Karl Krieger's original build notes, starting out with the master branch, and without xcode or macports on the machine, so this may help users starting from scratch:

1) Install Xcode from Appstore

2) Install Xcode Command Line Utilities with command

xcode-select --install

3) Accept Xcode license with command

sudo xcodebuild -license

4) Install Macports using installer from web site

Then install lux dependencies from Macports (Python bindings removed from vigra build to avoid bloated installation).

5) sudo port install vigra -python38

6) sudo port install sfml

7) sudo port install exiv2

/opt/local is the default installation directory of Macports. Adapt the following code accordingly if you have specified an alternative location.

8) In makefile add to compiler_flags the options

-Wno-deprecated-declarations -I/opt/local/include

9) In makefile add to pv_libspec the option

-L/opt/local/lib

The resulting binary has not yet been tested much - it was merely reported to run, be able to display an image, zoom and change brightness.

Some glitches which were noticed initially were ironed out in the meantime, but in-depth testing is still on the to-do list. What problems are likely? You may, for example, find that some keys don't work as expected.

Harry van der Wolf has kindly provided scripts to produce a macOS bundle, which can install a lux binary with no need for building lux from source. This is new and has not been used much - the relevant files are to be found in a sub-folder 'scripts'. I'll add more documentation on bundling lux when the process stabilizes.


Building 'native' lux

****** new build instructions for 'native' builds per March 2022

The 'native' branch will no longer be maintained. 'native' builds can now be done from the master branch with CMake. Please refer to the 'Building lux with CMake' section where 'native builds' are explained.



Building lux with std::simd

****** new build instructions for std::simd builds per April 2022

The std::simd variant now builds from the master branch, activate use of std::simd by configuring cmake with -DUSE_STDSIMD=ON. Then proceed as with the other builds. If you have the GNU C++ library v11, it should come with a std::simd implementation, otherwise you can pick the one from https://github.com/VcDevel/std-simd.
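
Starting from the repository root, the whole process looks just like the other builds, only with the extra option:

mkdir build
cd build
cmake -DUSE_STDSIMD=ON ..
make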

These two variants seem to be very similar; I think that the GNU code is derived from Matthias Kretz' implementation. The greatest drawback is the missing atan2 implementation, which slows down a lot of animations requiring atan2 for geometric transformations.

Note in 2024: lux now uses C++17, so std::simd should be present out-of-the-box.


This binary runs quite fast - ca. 30% slower than a Vc build. The lower performance is due to several factors: std::simd does not provide gather/scatter support, which is a large drawback because lux uses a lot of gather and scatter commands, which it has to emulate by trying to coax std::simd into doing the right thing - which does not seem to happen most of the time. Another factor is the absence of Vc's interleaved memory wrapper, which offers very fast processing of interleaved data using load/shuffle and shuffle/store combinations - also a feature lux uses if it's available. Finally, there were a few minor bugs in std::simd when I last tried it, and I had to program around them. My approach to using std::simd is to simply implement vspline::simd_type with std::simd to the extent this is feasible and delegate to emulation code where it's not. This is surely not the optimal way to code with std::simd, but it's quite straightforward and a good starting point. On the plus side, there probably aren't that many applications out there using std::simd already, so lux can claim to have been able to use it early on, and even to quite good effect. Another omission is a SIMDized version of atan2, which lux relies on.

As of this writing, the std::simd variant does not support CPU detection. A workaround is given in the discussion of the std::simd build in the 'Building lux with CMake' section. Manual ISA selection (like, --isa=avx2) works as long as the corresponding flavour was added.


Building a lux AppImage

To facilitate distribution across several Linux distros, I have opted to use AppImage. To build an AppImage which runs on many different, and also older, systems, it's recommended to build on the oldest still-supported Ubuntu - as of this writing, this is 20.04 LTS. I set up a VM with that system and added the libraries needed for lux itself and the build; to create the best possible rendering code, I built highway from master. With the necessary dependencies fulfilled, the build can be done with a build script, like this:

cd /path/to/pv
mkdir build
cd build
../scripts/make_appimage.sh

The build requires two AppImages:

appimagetool-x86_64.AppImage and linuxdeploy-x86_64.AppImage

I had to apply a few changes - especially because I have to use quite an old libexiv2, which is not findable by CMake and uses toLong rather than toInt64, which newer versions use.

The resulting AppImage ran on the Linux systems at my disposal, but this is still a new development and has not been widely tested. I offer recent AppImages on the Download page.


Source Code

lux' source code is heavily commented, trying to be instructive. I've seen too many good programs with no or very few comments in the code, and I've always felt reluctant to work my way in 'the hard way'. Admittedly, lux' code is complex and, having worked on the code for years, I have come to use bits of lingo which aren't quite 'mainstream', but I've made an effort to provide ample comments - also for myself, helping me when revisiting code I haven't touched for some time. So, hopefully, for anyone trying to figure out how lux does what it does, looking at the source code isn't too daunting a proposition. My formatting style separates tokens with white space; I prefer that to the usual style, which is more compact, but less legible. I use long, telling variable names, and I'm old school insofar as I use lower-case identifiers with underscores instead of camel-case, which I dislike. I program bottom-up, so the highest-level code comes last, avoiding forward declarations if possible. I use RAII. And I tend to use fully qualified names. Have a look!

This program started out as a simple demo for vspline, my library for uniform b-spline processing. Then it evolved to become so useful and comprehensive that I felt it should become a project in its own right - and with a different license. lux is licensed under GPL v3, please see the file 'LICENSE'.

vspline uses a functional approach to build pipelines processing 'xel' data: stuff like pixels, voxels, stereo sound samples - and also plain old 'single-channel' data like float. lux uses vspline's 'transform' routines to apply vectorized pixel processing pipelines to image data with automatic multithreading. That's its essence: given a set of parameters, build a pixel pipeline from functional elements, then use vspline::transform to 'roll out' the pixel pipeline to all pixels in the target image. vspline is a b-spline library in the first place, but it has very sophisticated rolling-out code as well, using automatic multithreading and hardware vectorization.

To facilitate building lux from source, I have added vspline's 'headers_only' branch as a subtree to lux' git repository. Access to vspline headers in lux will use these headers and ignore a system-wide installation of vspline. This ensures that there are no version conflicts and makes setting up quicker. And it makes it easy for you to peek into the vspline code ;)

File selection is done using tinyfiledialogs - this is a package of C code delegating file selection to the host operating system, and so far it's worked well for me on Linux, Windows and macOS. I've added tinyfiledialogs to lux' git repo to avoid requiring a separate download, but this means that I may have missed a new release and the version in my repo isn't up-to-date. You can check tinyfiledialogs' site if you want to make sure you are using the latest version. Recently it hasn't changed very much.


The zimt Branch

I have started a new project called zimt, which factors out some code from vspline into a stand-alone library. zimt - for now - takes over three parts of vspline's functionality.

To facilitate working with the zimt code in an application context, I have created a branch called 'zimt' in this repository. This branch has a copy of the zimt headers, it omits those vspline headers which aren't needed any more because their functionality is now provided by zimt, and it has modified vspline code to use zimt. This is work in progress and needs some adapter code to handle the transfer from vspline to zimt and back, which is most apparent in the adapted vspline/transform.h, which re-routes vspline's transform family of functions.

The greatest technical difference between vspline's transform routines and zimt's equivalent is that zimt does not use the scalar form of the eval member function in unary_functors. Instead, all data are processed with SIMD code, and underfilled vectors are supplied with additional lanes copied from 'genuine' lanes. This works with the vast majority of vspline code in lux; only reductions need special treatment, so as not to include the 'stuffed' extra lanes in the final product. Such code is rare in lux, so I only had to add 'capped' evaluation code for a few functors that perform reductions (like the ones used for light balancing).

With the initial commit of the zimt branch, the resulting binary performs all right, roughly on par with the standard. Since the use of zimt is an internal detail and the zimt headers are included, working with the zimt branch should not be different from using the master branch - all cmake options are usable, all back-ends should work just the same.

zimt offers some new code which I hope to exploit in lux in the future. What's perhaps most interesting is the code for tiled storage, which should make it possible to handle data sets in lux which far exceed the physical memory, at the cost of reduced processing speed. This would be especially welcome for large facet maps, which take up a lot of memory, and for stitching jobs on them, which need even more. But this isn't there yet - so far zimt is only employed to take over some of vspline's work. There are already some example programs in the zimt repo using tiled images and processing tiled data, have a look if you're interested. There's also some code 'exercising' OpenImageIO with zimt.

In return, lux is used as a test-bed for the new library, which helps in making sure that it does what it's supposed to do.


First Launch

All set up? Time to do a first launch of lux. Mac users, please refer to the chapter Using lux on a Mac, a bit further down!

Use a file manager to navigate to the lux folder (the one containing your clone of lux' git repository and your newly built lux binary - or the folder containing the ready-made binary). Doubleclick on the binary - the one labeled 'lux', or 'lux.exe' on windows. You'll be presented with a file selection dialog letting you select one or several files. Once you okay your selection, lux will try and display the first image, which may take a while, especially if the image file is large. If you've selected more than one file, you can 'go to the next image' by pressing Tab, and Shift+Tab if you want to go 'back'. If you press Tab when the last image in the queue is displayed, lux will show you a file select dialog to allow you to load more images. When 'tabbing' to a new image, lux will display the current image until everything is set up to display the next one - this may take some time, and you should see what's going on in the status line. Since image import moved to using OpenImageIO, there is now a very large range of possible input, so the file-select dialog now shows you all files without a pre-selection for image file types. If you select non-image files, lux will silently ignore them.

You can add more images to the selection by calling the file select dialog again; just press 'F' or click on 'File/Open' in the GUI. These files will be queued right after the current image - if there were more images in the queue, they will be displayed after the newly added ones.

The default is for lux to start up in full-screen mode, and if the image file is large, you may see a splash screen until lux has set up what it needs to show you a first view. Once you see the image, try moving the mouse pointer close to the top margin. You'll see the menu appear, and from the menu, you can open lux 'panels' - embedded windows which give you access to most of lux' functionality. lux will remember the placement and sizing of the panels from one session to the next. Pressing 'Escape' while panels are visible will close them all. Inside the viewing window, most work is done with the mouse and keyboard. Try and get acquainted with the mouse gestures - most of them are 'click and drag' gestures, optionally with 'Shift' or 'Ctrl' held at the same time. They are the best way to get smooth animations.

If you're working from a command line and lux has not been installed in your system's 'path', navigate to the lux folder and launch lux there - or use an explicit path to lux. There is a reason for calling lux with a path: it looks at its invocation (to programmers: it inspects argv[0]), extracts the path from the first argument and tries to load its default font from there (unless you're overriding that with a -w flag, see below). If lux has been installed on your system, just issuing the 'lux' command should work from everywhere, because the font is installed in a place where lux finds it.
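
Such a launch from the command line might look like this - the image name is a placeholder:

cd /path/to/lux       # so lux can find its default font
./lux some_image.jpg  # placeholder image name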

Command line invocations can do much more than simply launch lux: you can 'tell' lux which images to load, override or specify projection and field of view, and configure lux' operation in many ways - please refer to the Invocation section. I am a command-line person myself, and saying that lux is a command line program is not totally off the mark - but with the new GUI, the need for passing command-line parameters has lessened: especially the 'general settings' and 'conditioning settings' now affect what was only possible from the command line previously, and there you can even choose whether you want the options to be applied only to the current image or to the entire lux session. Since this was a major change and has not been in use for very long, some options may not work as intended - please do not hesitate to raise an issue if you run into problems.

If you want a quick listing of command line options, try passing -? or --help. There is also lux_options, a dictionary-like hypertext describing all lux options in detail. The panels offering access to lux options use the same option names, so you can find more detailed information on options which the GUI offers by looking into that file.

Looking at 'general settings' in the GUI, you may have noticed the wide text entry field labeled 'extra arguments'. This is a 'window to the command line world' - you can enter command line arguments which override or add to the current settings, just as you would enter them on the command line (so, with the appropriate number of leading '-' signs). I admit this is an uncommon way of communicating with a program, but for CLI people it's a welcome shortcut to change something quickly without having to terminate and restart the program, especially when working with large images which take long to load.
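
For example, entering this into the 'extra arguments' field would switch to windowed mode and force dispatch to the avx2 flavour, just as the same arguments would on the command line - the combination is merely illustrative:

-W --isa=avx2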

The new GUI has a few nice features which were missing in the rather rudimentary old GUI (which is still available as 'legacy GUI' - press 'C') - among them the ability to cut and paste. This comes in handy when trying out stuff from the documentation. I also added a menu entry to get a 'guided tour' of Dear Imgui, look for 'ImGui Demo'.


Using lux on a Mac

If you have browsed the 'Downloads' section of my bitbucket repo, you have seen files on offer containing 'MacOS' in the filename, two for each lux version. One is for intel-based macs, the other for the newer macs with 'apple silicon', the M series of CPUs, like M1 or M2, which use an ARM-based architecture - hence the 'arm64' in the filename, whereas the filename for intel-based macs contains 'x86_64' instead. I haven't yet figured out a way to build 'universal binaries' which contain both versions in a single package.

These files are 'dmg' files, so-called 'mountable disk images'. To install lux (or any other program) from a dmg file, you click on the dmg, which mounts it and opens it in a Finder window. Now you need a second Finder window showing your 'Programs' folder. Next drag-and-drop the lux icon from the first folder to the second. This installs lux to your mac. After the installation, you can 'bin' the icon representing the mounted dmg - it's no longer needed. If you update lux to a newer version, you simply do the same again with the newer dmg.

Because I have not registered with Apple as a developer, the dmg files I can offer are not digitally signed. If you click on the newly-installed lux icon in the dock, you won't be able to run lux; macOS will not allow you to run a program this way for which it can't figure out where it came from.

If you decide to trust the dmg and run lux despite the warning, you have to launch it with a secondary-click on the lux icon. The resulting dialog will give you the option to execute it, and this is the way to get lux running, provided you explicitly permit it in the ensuing dialog. You only need to do it this way the very first time after the installation; later on, simply clicking on the lux icon will do.

There is another obstacle which some mac users have reported: I update my macs regularly to the very latest security patches and OS versions. If you don't do the same, your system may refuse to install a recent lux build, which was made on an up-to-date system. I don't know a workaround for this problem other than updating the mac in question. I have successfully installed lux built on an up-to-date Big Sur to a system running an up-to-date Monterey, so different OS versions seem to be no problem, but installing on a mac which hasn't been updated may fail.

When starting lux, the first thing you get to see is a file-select dialog, where you can navigate to a folder containing images and select one or several. Once you okay your selection, lux will start 'for real'. On a mac, lux is started in a window (on other systems, lux starts in full-screen mode), displaying a view of the first image. Now you're almost set - if you play with the view, after a while you will get a dialog asking for additional permissions: lux would like to monitor your keyboard and mouse even if other programs are running. To get the full lux functionality, you should allow that - again, only once after the installation. The ensuing dialog will close and restart lux after the permission was granted. Note that you may have to grant the permission again if you've updated to a newer lux version!

After these preliminaries, running lux on a mac should be very similar to running lux on Linux or Windows. There is one annoying difference: lux is not integrated properly with the Finder. You can navigate to an image with the Finder and ask for it to be opened with lux, but the next thing you'll see is a file-select dialog: the selection is not passed on to lux correctly. This is due to different selection-passing mechanisms in macOS and other operating systems: most systems pass a selection as a set of command line arguments to a newly-launched instance of the program. On macOS, the selection is packaged as an 'apple event' and passed to a newly-launched or extant instance of the program, which needs to decode this event and handle it. lux does not know 'apple events' or how to handle them, so it merely finds it's just been launched, and since it does not know of the selection, it opens the file select dialog. This is less of an issue than you may think; using lux' own file select dialog is just as powerful: it uses the system's file-select dialog.

One more thing about the file-select dialog: If you switch your view to full-screen (using the window controls), and then want to select images, pressing 'F' or clicking on 'File/Open' in the GUI, you won't see the file-select dialog: it's opened in a window, but the full-screen view won't 'make room' for it, and lux seems to 'hang' with the 'beach ball of death'. You need to 'manually' Command-Tab to the file-select dialog to make a selection. This is, again, due to subtle differences between the systems: The full-screen mode on macs works differently, and lux does not 'realize' it actually is in full-screen mode. This stops it from making room for the file-select dialog, resulting in the obscured dialog. After you've made the selection, you need to Command-Tab back to lux.

You can see that lux is not fully integrated with macOS, but apart from the few hiccoughs I have listed above, it should behave just as it does on other systems. This is a bonus if you work on many different machines: apart from a few exceptions, lux is just the same on every machine, and you can just use all the mouse gestures and keystrokes you are used to, rather than having to re-wire your brain every time you use a different machine.

I am, altogether, pleasantly surprised by how well lux runs on macs - even on the older machines I have: a Mac Mini from 2014, running Monterey, and an older iMac running Big Sur. I configure lux for macs to start up in a window and to use 'automatic rendering quality', so the view may look a bit blurred when in motion (like, when panning or zooming). With this setting, simply viewing images and panoramas works just fine, even on the Mac Mini with its dual-core processor. I realize that getting lux to run on a Mac is not as straightforward as one would wish for, but once you have it set up right and keep in mind the few shortcomings, I hope you'll get to like it!


Acceptable Input

What is acceptable input? lux can read a variety of common image files, like JPEG, TIFF, PNG and openEXR. The files are opened with OpenImageIO, which looks at the file content to recognize the format, so the file name itself is not relevant. The precise set of image file types lux can open depends on the set of OpenImageIO plugins installed on your system.

On top of image files, lux accepts its own format for specifying sets of parameters and images, the 'lux ini file' format, which should come with the extension '.lux' - and PTO (PanoTools Optimizer) format, which is used by hugin and other programs from the Panotools family. lux only 'understands' a subset of the full PTO parameter set, but this set is growing and should already cover a wide range of use cases.

It's not a bad idea to make lux your standard image viewer by mime type association (which varies between systems) - and to also make it at least a choice as the program to use for PTO and lux ini files. This allows you to directly launch a lux session with one or several images selected in a file manager via the context menu. lux itself can also present you with a file select dialog, which should look quite native because it relies on tinyfiledialogs.

lux itself is unaware of the file and folder structure of its host system. There are several places where lux accepts file names, but to lux, they are just strings. It will try to 'open' any string, and if it's not acceptable input, it will ignore it. Why so? Making assumptions about the file system breaks portability, unless there is a portable layer providing file system access. I'm making an effort to use as little external code as possible, so I've opted not to use such code - and wait for std::filesystem to become commonly available with C++17.

Because lux does not have a notion of the file system, you have to provide every file's path - either relative to the present working directory or absolute. It's easy to work with absolute paths; there are no other rules to consider. If you pass absolute paths, that's what lux will try to open. If you use lux' file select dialog, the path will be clear from the folder you've navigated to.

Relative paths are fine as well: If you're launching lux 'in' an image folder, just passing the file name without a path is fine: in that case, the filename in itself is a valid relative path.

On top of image files, lux also accepts 'lux ini files', provided they 'produce' at least one image - or contain directions for synoptic views of several images (see 'cubemaps' and 'facet maps'). lux will now accept lux ini files only with the '.lux' extension.

lux ini files provide a handy way to bundle a set of arguments with an image file, because all other options in the ini file are also honoured. See the section on ini files further down. lux ini files are 'located' just as image files are, so if they contain no path separators, whatever is in 'path' will be prepended to them. There is one last rule concerning the locating of files: if a lux ini file is passed instead of an image file, the path to this file is used to locate the image file(s) it uses. Why the extra frill? Because you 'normally' want 'stand-in lux ini files' to produce images from the same folder.

This mechanism gives you a simple way to set up, for example, slide shows which go beyond simply showing one image after another. You might, for example, have full sphericals starting up with 'autopan' set, or have several ini files for some larger image, showing specific parts of it by putting initial yaw, pitch and roll values into the ini files. Also, when going 'forward' or 'backwards' (Tab or Shift+Tab, respectively), if this 'arrives' at a lux file, the same sequence of arguments will be used again. It's like specifying a specific way to start looking at an image, rather than simply specifying an image's file name - like a sidecar file in a raw converter telling the program which settings to use. A common use is to 'wrap' an image file with specific information about how lux is supposed to treat it, which is especially useful for image files that can't have metadata. lux does use this feature when it produces openEXR output: it adds an ini file with metadata for the EXR file. When lux receives such an ini file as input, the image can be displayed with correct projection, hfov etc. I find this makes openEXR format much more convenient to use. Note that with the switch to OIIO, I may be able to store metadata in openEXR files and avoid the side-car files, but I haven't implemented that yet.
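As an illustration, such a 'wrapper' might look like this - a hypothetical example, where 'pano.jpg' and the values are made up, but all the options used here are documented further on:

image=pano.jpg
projection=spherical
hfov=360
initial_yaw=90

Passing this file instead of pano.jpg displays the panorama with the given projection, field of view and initial orientation.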

A word of caution: try and keep it simple, even if you're free to do very complex stuff. Using lux files which read other lux files, setting 'path' several times - it's all allowed, but it can become hard to figure out what's going on precisely.

If, finally, all path manipulations and stand-in ini files produce a definite file name, this file name is tested for OpenImageIO viability. If that fails, lux will silently ignore the file and proceed to the next one.

Note that these tests are only performed when a file's time has come to be opened. You are free to pass anything, it only has to be accessible once lux tries to use it, and it will be processed with whatever options are set at that moment. So you can do trickery like producing images with a lux file and proceed to show them, because these image files don't have to be there when the lux file is processed, but only when the time comes to display them.

If you have trouble getting lux to show stuff, you may be better off launching it from the command line, because then you'll see error messages when things go wrong. lux won't show you dialog boxes or such if it can't go on - it will simply terminate after - hopefully - echoing something helpful to the console. Recently, I added a few message boxes for common errors leading to program termination, which can also give helpful hints if lux terminates unexpectedly. These message boxes display the C++ source file and line to help finding where the error occurred; they don't mean that there is a bug! lux now also emits a dialog when a snapshot or other rendition would overwrite an existing image file.

lux ini files are also lux' 'native' way to specify 'synoptic' views composed of several images ('cubemaps' and 'facet maps'). This mode of specifying synoptic views is no longer well-supported: lux now uses PTO format for this purpose, to be compatible with software like hugin. PTO files are used by hugin and other programs from the panotools family to describe the properties of a set of images and how they can be combined into a synoptic view - like a panorama. In lux, their content is used to set up a 'facet map' of the contributing images, and only 'i-lines', 'k-lines' and 'p-lines' are scanned. What does that mean? i-lines in PTO format describe properties of the contributing single images, k-lines specify masking, and the p-line specifies the output. For simply viewing the content of a PTO file, the p-line is ignored - it's only looked at for stitching and exposure fusion jobs 'to PTO specification', which will be explained further on.

To put it more technically: lux relies on software like hugin for image 'registration'. The result of image registration is encoded in a PTO file, and lux can 'take it from there'. Other stuff which, e.g., hugin puts into the PTO file (colour space information, EMoR curve etc.) is ignored.

To help with 'grabbing' filenames from file managers, lux will accept file names with a URI file:// prefix and silently remove it. If you invoke lux with

lux file:///path/to/some/file

That's equivalent to

lux /path/to/some/file

This will not work with all file managers, but if you 'copy' an image from, say, a dolphin window, and then 'paste' it to a shell window, you get the URI. I do this often, and I found it annoying to have to manually delete the prefix.


Configuring lux

When lux starts up, it looks into your home folder (gleaned from the HOME environment variable) to find a file named ".lux.ini". If this file is present, it's read as a "lux ini file" - a file holding lines in the form of arg=value (see above for more on lux ini files). You can use this file to configure your lux invocation to your liking. The file is read before command line options are processed, so if you pass command line options they can override options from the ini file.
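Here's what such a file might look like - just a plausible sketch, pick the options you want as your defaults (all of these are documented in the options list further on):

# my personal lux defaults
fullscreen=no
window_width=1200
window_height=800
show_status_line=yes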


Using lux on Linux

Some recent bug notices for lux on linux:

!!! bug notice: Attention Gnome Users:

There is a bug - AFAICT in 'mutter' - which affects full-screen operation: at times there is a grey bar at the top of the screen which is the same size as a normal window's header bar would be. The bug is known but so far it doesn't seem to have been fixed:

https://gitlab.gnome.org/GNOME/mutter/-/issues/2937

I have added code to work around the issue in lux. I noticed that SFML provides a Resize event stating the erroneous size when the bug happens, and if I detect this event, I re-open the window in fullscreen mode. If the bug happens several times in succession (which is rare) this produces a bit of flicker, but eventually, lux 'comes round' to the intended fullscreen mode, so the bug is reduced to an occasional nuisance while it's not fixed.

!!! end of bug notice

!!! bug notice: mouse cursor confined to part of screen in full-screen mode

I've now coded around this bug with quite good success, but it seems it still occurs occasionally after switching lux to full-screen mode. One workaround is to press F11 twice and hope it's gone afterwards. Another workaround is to move the mouse pointer away from the top of the screen (to avoid having the menu on) - and, if any GUI panels are open, to close them. Then wait half a second until the mouse cursor disappears. After that, the entire screen should be reachable again. The bug tends to occur if you open an image and instantly move the mouse up to use the menu.

!!! end of bug notice

On my system, I use lux as the default program for image, .lux and .pto files. The place where you can do this varies from desktop to desktop; when using KDE, you'll find it in the KDE system settings. Don't forget to write the entries like '/usr/bin/lux %F', so that all images selected are passed on to a single lux instance, rather than opening a separate instance for every image file. To specify that lux should be used for selections of a 'mixed bunch' of compatible files (like, TIFFs and JPEGs) may take some fiddling with the mime-type association software, but I managed to get it done with the KDE system settings.

If you navigate to an image folder and open a single image with lux - either with the 'open with' context menu option or with a double click on an image file for which lux is the default application - lux will take note of that image folder's path and display a subsequent file select dialog with this folder's content. File select dialogs always 'start out' with the last displayed image's path. Once inside the file select dialog, you can select any number of images of all supported types, and they will be queued to be displayed in the very lux instance you're currently running. This can help you work around the issue mentioned above: just start out with a single image, then use lux' file select dialog.


Sharing a Windows Binary

You can put lux.exe and the necessary DLLs, font file, READMEs, and license texts into a folder, e.g. on a USB stick, and lux.exe can then be run from that folder without the need for an msys2 environment. Packaging the files in a folder creates a portable application; just keep in mind to invoke lux with a path to the exe so that the font can be found. When passing on such a 'bundle', please keep in mind that the license of the Noto font requires the license file from the zip archive to be passed on alongside the font. Make sure you understand lux' license and add the relevant information. It's also a good idea to add an HTML version of this README, which can be made easily with rst2html. Compressing the 'bundle' folder into a zip file creates a handy package to pass on to others. I now add information about the shared libraries lux relies on to the bundle, to do them justice and offer a route to find them. This is a laborious task - lux literally links to hundreds of libraries, either directly or indirectly, and with the introduction of OpenImageIO, the number has grown considerably. Look at THIRD-PARTY-LICENSES if you require information about the shared libraries used by lux. On an msys2 system, you can list the DLLs lux.exe pulls in from the mingw64 environment with:

ldd lux.exe | grep /mingw64 | sed 's/.dll.*/.dll/'

Of course you can use ldd on unixoid systems as well ;-)

I have added a bash script to the scripts folder to create a lux 'bundle'. This script creates a folder and copies all the relevant bits into it, adding a freshly made HTML version of the README and the currently used sources, plus licensing information and the default font. The resulting folder is 'stickware': you can simply copy it around (like, on a USB stick) and launch it from the stick by clicking on lux.exe. Adding all source code is the easiest way to make sure that every user can access the source code: just give it to them. You won't need to write complicated explanations about how they can go about accessing it from some website if you simply pass it on verbatim.

I'm now distributing such 'lux bundles' on https://bitbucket.org/kfj/pv/downloads/ - they are named lux_for_windows*.zip, whereas the files ending in _setup.exe are binary installers which install lux to a windows system.


Displaying Several Images in one Session

The first, and most obvious, place to specify several image files is the command line. Here, all 'trailing' arguments - those arguments which are not options - are taken to be image file names. An alternative way of passing image files on the command line is to use '--image=...', which has the same effect. If you pass more than one image (or .lux file or .pto file), the file names are queued to be displayed later on.
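So, for example,

lux --image=pano.jpg

is just another way of saying

lux pano.jpg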

During the session, pressing 'Tab' will go to the next image, and Shift+Tab will go to the previous one. lux also offers a slide show mode which proceeds to the next image after some time.

All arguments in the initial invocation which are not image files (or .lux or .pto files passed as images) will be used for every view in the whole session.

The second way to pass image files is via a pipe. If you pass a single '-' at the end of your invocation, lux is run in 'streaming mode': as soon as it runs out of filenames from other sources, it will try to take image filenames from its standard input until it's exhausted. Again, all (non-image) arguments from the command line will persist and will be used for every image in the session. 'streaming mode' is a good way to sift through entire folder hierarchies looking for images; on linux you can do stuff like:

find /home/me/images -name '*.jpg' -print | lux -d2 -

This will use lux to show a slide show of all JPEG images under /home/me/images.

From 'within' lux, you can add more image files to the file queue by using a file select dialog. This is launched by pressing 'F' or clicking on File/Load in the GUI. Note that this method may fail for (very) large numbers of files; the file select dialog is realized with tinyfiledialogs, which in turn uses system-specific tools which may or may not be able to accommodate a large number of files. As of this writing, the limit is 1024 files. The set of files you select is inserted at your current position in the file queue, so if you have already looked at some images and have a few more lined up, you'll see the newly selected ones next, followed by the ones which were still 'pending' when you used the file select dialog. There is no way to remove images from the file queue, and no way to 'skip' files.

The file dialog is also handy to 'bounce' synoptic views: If you inspect a PTO file with lux and are satisfied that it is processed correctly, you can make lux create output to the specs in the p-line by pressing Shift+E. Once the output is ready, just open the file select dialog and select the most recent image, which is the output lux has just made - the 'bounced' PTO file. This is a single image file and rendering it is easier for lux, so you'll get smoother animations.

There is an additional way to add files to the queue: the GUI offers a text field to specify 'override arguments'. There, you can enter any set of valid command line arguments. When you commit, normally the image is displayed again using the additional arguments on top of the 'persistent' ones. If there are any images in this set of 'override' arguments, they take precedence and are displayed next. If you don't enter anything into this field and commit with Return, this is like pressing F1: the show-again action is triggered anyway - see the documentation for the F1 key for an explanation.

When lux is made to proceed to the next 'cycle', which may show the next image in the file queue or the same image again with modified parameters, lux tries to 'recycle' its interpolators, because they constitute expensive assets: the data were read from disk, which is slow, and the image pyramids were built, which also takes a good deal of time. So if the interpolators can be recycled, this is a really welcome resource-saver. This implies that some scenarios can be handled very efficiently: if you display a series of ini files standing in for images, then, if they all refer to the same image(s) and fulfill a few more compatibility constraints, the interpolators can be reused each time. And most of the time, re-playing an image with modified parameters can also be done with recycled interpolators and therefore much more quickly than a 'start from scratch'. Keep this in mind when producing series of snapshots with controlled parameters (like, with increasing initial_yaw value): you're better off using a set of ini files stating image file and parameter set for each snapshot, and passing all the ini files to lux, because the interpolator will be reused every time. If you start lux afresh for each snapshot, the image will have to be loaded from disk every time.
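To sketch the technique: suppose you have two ini files, yaw_090.lux, containing

image=pano.jpg
projection=spherical
hfov=360
initial_yaw=90

and yaw_180.lux, containing the same lines with initial_yaw=180 (file names and values are, of course, made up). Then

lux yaw_090.lux yaw_180.lux

builds the interpolator for pano.jpg only once and recycles it for the second view.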


Image Metadata

For panorama viewing, it's best to feed lux with images containing metadata.

lux uses its own set of metadata describing image projection, field of view and cropping, and all images lux produces - as snapshots, stitches, etc. - are supplied with these metadata if the image format supports metadata. Why yet another set of metadata? I found two types of metadata used by hugin, and neither suffices to describe all images well enough to display them correctly. The first type is hugin's 'UserComment' EXIF data, which don't supply cropping information. The second is GPano Photo Sphere metadata, but these are only for spherical panoramas. lux metadata can be used for all projections and support cropped images.

So let's start out with lux metadata. Lux writes and reads a set of XMP tags:

  • lux_version - version of lux which has produced the image
  • cropping_active - whether the image is cropped or not (1/0)
  • uncropped_hfov - hfov of the image without cropping (in degrees)
  • uncropped_vfov - vfov of the image without cropping (in degrees)
  • projection - rectilinear, cylindric, spherical, stereographic or fisheye
  • uncropped_width - width of the uncropped image in pixels
  • uncropped_height - height of the uncropped image in pixels

if cropping_active is set, these values should be present as well:

  • cropped_width - width of the cropped area
  • cropped_height - height of the cropped area
  • crop_x0 - horizontal start of the cropped area (counting from zero)
  • crop_y0 - vertical start of the cropped area (counting from zero)
  • crop_x1 - one after horizontal end of the cropped area
  • crop_y1 - one after vertical end of the cropped area

The x and y values can also be used to infer the size of the cropped area in pixels:

cropped_width = crop_x1 - crop_x0
cropped_height = crop_y1 - crop_y0

So the data are slightly redundant.

lux also supports hugin-generated 'UserComment' EXIF data, which hold projection and field of view information. But there's an issue with cropped images: the field of view information gleaned from the UserComment EXIF tag may not be correct. The problem is with 'cropped' images only; if there is no cropping, the values are okay. lux looks at the UserComment tag and, with the given projection, checks the hfov and vfov for plausibility. If the values don't seem plausible, lux assumes that the image is cropped, emits a warning, accepts the hfov (even though it may be 'off' due to cropping) and calculates a suitable vfov. For cropped images, the hfov found in the UserComment tag is usually not as far off the mark as the vfov, so the resulting display usually looks 'acceptable'; anisotropic distortion is avoided.

What's preferable is GPano or 'Photo Sphere' metadata, which - as far as I know - can only be used for spherical panoramas, but work very well, also for cropped images, which is important to position the horizon just right. Do visit Google's documentation of these metadata - it provides good explanations. The problem with these metadata is the limitation to spherical images, which is even inherent in the choice of values: there is no information about the fov of the uncropped image - because it's always taken as 360X180, a full spherical.

If you have spherical panoramas lacking metadata, you can add these metadata quite easily by using exiftool; lux looks only at a subset of the GPano metadata.
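For example, with exiftool you could tag a hypothetical uncropped 8000X4000 full spherical like this (the tag names are exiftool's XMP-GPano tags; the pixel values must of course match your image):

exiftool -XMP-GPano:UsePanoramaViewer=True \
  -XMP-GPano:ProjectionType=equirectangular \
  -XMP-GPano:FullPanoWidthPixels=8000 \
  -XMP-GPano:FullPanoHeightPixels=4000 \
  -XMP-GPano:CroppedAreaImageWidthPixels=8000 \
  -XMP-GPano:CroppedAreaImageHeightPixels=4000 \
  -XMP-GPano:CroppedAreaLeftPixels=0 \
  -XMP-GPano:CroppedAreaTopPixels=0 \
  pano.jpg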

For spherical panoramas, hugin will also embed these metadata in your images if you tell it to do so - it was optional when I last checked. Note that the metadata may be lost 'further down the line': if you postprocess your panoramic images with image editing software, there's a good chance this software will not carry all metadata over to its output. Some image editing software can be made to transfer metadata to its output, some programs even allow you to specify which metadata are transferred - but some just remove some or all metadata.

Snapshots done with lux will contain a 'hugin style' UserComment EXIF tag, which gives the projection and horizontal field of view, and also lux' own flavour of metadata, so such snapshots should display correctly when loaded again with lux. This may at times have surprising effects: if you take a snapshot from some panoramic image, the result normally gets a 'rectilinear' projection tag. When you open this with lux, it will automatically be displayed with perspective correction, as befits a rectilinear image file. You may instead have expected it to be displayed as an 'ordinary' image with no projection information. If this happens, you can specify the projection on the command line or set it in the 'geometry' panel in the GUI to override the metadata - or change the metadata with tools like exiftool. Snapshots done to a PTO file's p-line may have cropping. The lux metadata will reflect that, but the hugin UserComment metadata won't.

lux does not carry over any metadata from the images it displays - if you want such a transfer to happen, you'll have to rely on external tools, like exiftool. lux will not write GPano metadata to output images.

The above implies that when producing panoramic images to be viewed with lux, it's best to produce sphericals with GPano metadata, because they will automatically be displayed 'just right', and they should also work with other panorama-aware viewers.

If you have files without metadata or the metadata don't suit your intended display, you can also 'wrap' your image file in a lux ini file. If you pass such a file instead of an image file, the result is that the options in the file are set just for the occasion, before displaying the image. The method is explained in more detail in the 'Invocation' section.

Keep in mind that you can do without metadata - you just have to supply projection, field of view and offset data on the command line or set them in the GUI. Metadata are a matter of convenience. If you give away lux with your images so that users have a viewer to fully appreciate them, you'll want the images to have proper metadata, or you want useful ini files to go with the images. lux ini files give you total control, but they're specific to lux. You may want to have both an ini file and proper metadata.

If you want to say explicitly that lux should try to glean projection and field of view from metadata, you can pass --projection=auto.

There is one more EXIF datum lux always looks at: the EXIF orientation tag. lux now handles all EXIF orientations; I've recently added handling of 'flipped' orientations when switching to OpenImageIO.

What if you're looking at images 'straight from the camera' which have no panorama-specific metadata? Up to lux 1.1.4, you had to manually provide the image geometry for these images on the command line, passing projection, hfov etc.. From 1.1.5 onwards, if nothing is passed on the command line and lux can't find lux, GPano or UserComment metadata, it will assume it's a rectilinear image 'straight from the camera' and figure out the image geometry from EXIF data the camera has written into the image file. This is quite an involved process looking at several EXIF tags (have a look at the code in pv_metadata.h), but if it succeeds, chances are you obtain good image geometry data. lux will show the images accordingly, giving you automatic perspective correction, whereas up to lux 1.1.4 you would have seen a 'flat' display. If you find this new mode annoying, you can switch it off by passing --rectilinear_as_mosaic=yes, which will display all rectilinear images 'flat'. If the images aren't very wide-angle, this won't look much different, but once hfov goes beyond 60° or so, the distortions near the image's short edges become quite noticeable, and if you take out snapshots from such regions of a 'flat' projection, the result is badly distorted. With automatic perspective correction, though, you get snapshots which look as if they had been taken from the same position with a longer lens. This is especially important if you intend to add such snapshots to a panorama.

Your camera images may not yield correct image geometry. There are several causes for that: the camera hasn't written metadata which lux can use because the image was taken with a lens the camera doesn't know about (like my Samyang fisheye on the EOS) - or because some special mode of operation was used (like the panorama mode on smartphones). In both cases, one simple solution is to manually add a UserComment EXIF tag with the missing information: if there is no cropping, this tag should produce a correct display. lux will accept a UserComment tag with projection and field of view information if it contains a part "Projection: XXX" where XXX is your projection (Equirectangular, Rectilinear, Cylindrical, Stereographic, Fisheye or Mosaic) and a second part "FOV: XXX" where XXX is your horizontal field of view in degrees, or optionally, an expression of the shape "HHH X VVV" where HHH is the horizontal field of view in degrees, and VVV is the vertical field of view in degrees.
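Here's a sketch of that with exiftool - the file name is made up, and you'd adjust projection and field of view to your lens:

exiftool -UserComment="Projection: Fisheye FOV: 180" my_fisheye.jpg

After this, lux should glean projection and hfov from the tag.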

lux now has a method to recognize panoramas done with, e.g., a smartphone's panorama mode. Up to 1.1.4, such panoramas weren't recognized as such unless you 'manually' added metadata. Now, lux looks at images with an aspect ratio greater than 2:1 lacking panorama-specific metadata and considers them possible panoramas. If there are sufficient metadata to figure out the geometry of the optical system, lux assumes cylindric projection and a vertical field of view corresponding to the wider diameter of the sensor, which is typically just about right. The device has to provide focal length and crop factor (or equivalent focal length for a 35mm sensor) or metadata to infer them. Some - especially older - devices won't provide the necessary information; then the heuristic won't be triggered and you get a flat stripe, just as before, until you 'manually' provide projection and hfov. The main problem with panoramas done this way is that they often don't have the horizon coinciding with the middle horizontal - usually because the image was taken with the camera pointing slightly upwards or downwards. The resulting image, viewed 'flat', may still look quite convincing, but especially cityscapes with their many straight lines clearly show a misplaced horizon. You can fix this by moving the horizon up or down (H or Shift+H) until it's in the vertical center.


Hardware Considerations

lux is mainly CPU-based and wants a reasonably powerful CPU; you want a multi-core system with vector units, and at least chipset graphics for smooth animation. If you can live without smooth animation, just about any CPU should work. lux can adapt to your system, so even less powerful systems can make an effort to give you 'decent' animation quality, but if there isn't enough processing power, memory bandwidth etc., there is only so much lux can do. You'll have to try it to see if it works for you.

The rendering code is quite complex, and I have had reports of 32-bit systems failing to compile it due to lack of addressable memory. I'd like to keep this option open, but this will require serious refactoring.

If you're working on very old hardware, using lux will be 'no fun', even for simple tasks like looking at 'ordinary' images - the 'time to first light' will be long, and animations will stutter. Further down, you'll find a few things you can do to help with slow hardware, but - again - there's only so much you can do. You might think of lux as workstation software: processing large images - and especially large image sets - requires a lot of memory and processing power. lux does not limit its memory consumption automatically. It's coded 'optimistically' (some might say 'naively') to allocate whatever it wants and assume to get it, so if the physical memory is exhausted, the system will start swapping, everything will slow down to a crawl, and if even the swap space is exhausted, it will crash eventually.

To give you an idea what's a suitable system: mine is a Haswell core-i5 with four physical cores and AVX2, and I use the chipset graphics. For full HD display at 60fps, animation is fluid most of the time with bilinear interpolation (--fast_interpolator_degree=1, the default). Animation quality depends on many factors: the source image's projection, its size, the specific section you're viewing, linear vs. sRGB processing - it's impossible to make blanket statements. You'll find out by trying. Still image quality should exceed many other viewers: per default you get a (still) image rendered with area decimation or cubic b-spline interpolation, which is very nice and smooth even at high magnification, while animation is done using bilinear interpolation. Insufficient processing power will show as stuttering animations; still images are usually rendered fast enough to appear seemingly instantly. A lot of processing power is also invested into building lux' interpolators, and if the system doesn't have enough oomph, startup may be slow and showing animations right at startup may stutter until the interpolators have been set up properly. The interpolators also need a fair amount of memory to be fully operational; memory shortage will lead to suboptimal results, especially when the system starts swapping.

There are some tasks which are especially CPU-hungry. Using full alpha channel processing is expensive, and synoptic views - especially facet maps with feathering - are also hard to do in real time. Using target projections other than the default rectilinear is also often slow.

Synoptic views and stitching are especially memory-hungry. On my system (which has 16 GB of RAM) I can just about stitch 10000X5000 full sphericals when I use both --build_pyramids=no and --build_raw_pyramids=no; beyond that size stitching takes much longer because the system starts swapping.

There are a few flags which can help speed up rendering - at a cost:

-l will not use linear RGB internally, saving the conversion to sRGBA
-f0 will use nearest-neighbour interpolation for animated sequences
-q1 will use bilinear interpolation for still images
-a will ignore an image's alpha channel
-m [factor < 1] will render smaller images and pull them up with the GPU
-s will create less sophisticated interpolators and work on raw data instead
--squash=... will discard high-resolution imagery
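For example, on a slow machine you might combine several of these - a hypothetical invocation, pick whichever trade-offs you can live with:

lux -l -f0 -a -m0.5 large_pano.jpg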

Reducing animation quality is, in my opinion, a good compromise to deal with insufficient CPU power, but if you like long pans, you may disagree ;) lux now uses 'automatic rendering quality' as its default setting, so if you want to set a specific rendering quality for animated sequences, you have to switch the automatics off.

--squash=1

or

--facet_squash=1 (for facet maps)

This will discard the image data at the highest resolution and work as if you had started out with each image 'squashed' to half its original extent (or even more - you can try 'squashing' with values greater than one). If the input is photographs, this may even go unnoticed - modern sensors often use 'overkill' pixel counts, and the image is so blurred that halving it doesn't make a difference. Squashing is the equivalent of 'pixel binning': a squash factor of 1 bins four original pixels into one.

If, on the other hand, you have a powerful workstation, you can try and improve moving image quality beyond the default. Try this:

-f2 will use a quadratic b-spline for animations of enlarged views
--decimate_area=yes will use area decimation for less-than-1:1 animated views
-m [factor > 1] will render larger images (supersample) and compress with the GPU
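Again, a hypothetical example combining these:

lux -f2 --decimate_area=yes -m1.5 pano.jpg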

When you're processing synoptic imagery with lux - like a panorama specified in a PTO file - all image data are held in memory. If there are many, or very large, images in the set, lux allocates a lot of memory, and eventually the system starts to swap and performance suffers badly, up to a point where your system may end up stuck. And if you 'stitch to PTO specification' (press Shift+E) even more memory is used, because all the partial images for the stitch are rendered in RAM. Panoramas with stacks are especially memory-hungry. To help with lack of memory, you can reduce the amount of RAM used by the interpolators by specifying --build_pyramids=no and --build_raw_pyramids=no together. If this still uses too much memory, you can try and use facet_squash, which is a viable option if your target image's resolution is smaller than the source images' resolution.
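So a memory-saving invocation for a large stitching job might look like this (a sketch - 'big.pto' is a stand-in):

lux --build_pyramids=no --build_raw_pyramids=no big.pto

If that's still not enough, add --facet_squash=1 as described above.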


Graphical User Interface

!!!! NEW GUI CODE !!!!

After developing a new GUI for lux for some time in the 'imgui' branch, I've now convinced myself that it works well on all platforms I support and that it is superior to my own GUI - the 'lux legacy GUI', a strip of buttons at the top of the screen. So I've now merged the imgui branch back into master. The old GUI is still available (pass --legacy_gui=yes on the command line, or press 'C' to toggle between the old and new GUI), but it may go eventually. What I haven't yet managed is to write documentation for the new GUI - it's mostly self-explanatory via ample tool tips, though. The mouse and keyboard commands are the same as before; only the graphical elements are new, and they consist of several 'panels' giving access to common settings and options.


Invocation

Initially I only used one-character short arguments, like -x or -y..., but in late 2019 I finally implemented use of long arguments starting with the customary '--'. Both long and short arguments follow 'standard argument syntax', but on top of that 'long' arguments can be passed with this syntax:

lux --option_name=option_value ...

instead of the customary

lux --option_name option_value ...

I prefer the former style, because it implies an assignment, and I use it in this documentation. Note that when using this syntax, there must not be white space on either side of the '='. In lux, all long arguments must be passed with a value. ':' instead of '=' is still accepted, but deprecated.

In case of boolean flags, you must pass "yes" or "no" as the value - or one of a bunch of similar strings like "on" or "off" or '0' or '1'.
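For example, these invocations are equivalent - they all switch full-screen mode off:

lux --fullscreen=no image.jpg
lux --fullscreen=off image.jpg
lux --fullscreen=0 image.jpg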

This is handy with 'argument override' from the GUI: If, say, flag x sets something to true, which flag could you use to reset it to false? With long options, the problem does not arise, you just pass =no.

Together with the long arguments, I implemented (slightly) better error messages for wrong arguments, and I also introduced a simple kind of initialization file (the "lux ini file"): a file which simply contains lines with name=value statements, where 'name' is the name of a long option, and 'value' its intended value. Apart from that, any line in an ini file starting with # or white space is treated as a comment, allowing for commented parameter sets. This can also be used to specify synoptic views, but PTO format is better-suited for most purposes and less verbose. Some PTO features are not even available via lux ini files. On the other hand, some lux features (notably cubemaps) need to be specified with .lux ini files.

Argument processing can become quite complex, but there is one simple rule: Later arguments take precedence over earlier ones. This allows you to 'override' any previous arguments by simply specifying what you want after the initial values have been seen. This is useful when using lux ini files instead of image files: there, you can simply specify the options which should be used for the image you intend to show, and they will override any previously given ones. The 'mightiest' override is 'clear'. If you pass that, it's equivalent to setting all options to their defaults, just as if you'd started lux freshly without any arguments at all.

Here's a list of lux' command line parameters. The headlines give you the short and/or long option name plus the expected argument type. One reason I started using long arguments is that I simply ran out of short ones which I could memorize. Newer features now often don't have short option names at all. Please try not to be daunted by this very long (and admittedly not very well-structured) list. I try to document every little feature, but lux is just a one-man show, and it's demanding to write top-notch docu on top of good code. Again, please bear with me. If you prefer a 'lexical' approach, please refer to the text lux_options - the text following here tries to start with the most important options.

The set of options which lux takes is derived from processing 'options.h', which is just about readable: the macro name defines the option's type, the first parameter defines the option's name, the second one the default. For 'one_of' options, the remaining arguments list allowed values. You can get a listing of allowed long arguments and their defaults by invoking lux with -? or --help.

This section about options is written so that important options appear first, and thematically related options are kept close together. lux presents its options so that the option's name relates to a feature, and the option's value to the desired state of that feature. So there are numerous options which are set to 'no' by default. I find this makes handling options easier, because you can rely on this simple rule, rather than having to memorize options which eventually force you to use double negation (tell me quickly: what does --notyes=no mean? ... you get my drift. In lux it's --yes=yes). Only some short options negate the default, even if the default is 'no'.

-p <string>, --projection=<string>

  This option tells lux the projection of your (single) source image.

  You'll only need to pass this option if your image does not have the
  projection in it's metadata - images made by hugin do, for example,
  have metadata which will usually work. lux-generated images have
  suitable projection metadata as well. But without projection metadata,
  you *must* pass the projection, and if it's not a 'flat' image (mosaic
  projection), you must also pass the field of view - that's why I put this
  option first. Even if your image was initially made by, say, hugin,
  subsequent processing with image processing software may destroy the
  relevant metadata, so keep that in mind.

  you can pass the 'long' name or a single-letter abbreviation:

  spherical or s: spherical (equirectangular) images, up to 360X180 degrees

  cylindric or c: cylindric images up to 360 degrees wide

  rectilinear or r: rectilinear images

  stereographic or g: stereographic images (like from some fisheye lenses)

  fisheye or f: 'ordinary' fisheyes

  Finally, there is a mode for 'flat' images, like maps or mosaics. This mode
  does not use a field of view and implies a 'target projection' which is
  the same. Pass 'map', 'mosaic' or 'm'. If you don't pass a projection,
  and the image has no metadata, this is the default - lux will show your
  image without any geometric transformation.

  So you can see that lux supports the most common types of panoramic image
  projections using a *single image* for the data. The two most common
  projections for panoramas are spherical (also called equirectangular)
  and cylindric.

  When specifying rectilinear projection, you might expect to just get an
  'ordinary' view with no geometric transformations, but lux applies perspective
  correction, so if you have a rectilinear wide-angle shot and zoom to a part of
  the image near the border, you'll get the corrected view. If you want the
  ordinary 'uncorrected' view, use 'mosaic' or 'map' (m) projection instead.
  Finally, lux also supports 'ordinary' fisheye and stereographic projection.
  Full 360 degree fisheyes can be expensive to render when showing the part
  of the image 'opposite' the center, resulting in stutter.
  Take note of the 'stereographic' projection. This is not such a common
  panorama format, but some fisheye lenses - notably the popular Samyang
  stereographic fisheye - produce images in this projection. If you use
  such a lens, here is a handy way to view the images taken with it and to
  extract rectilinear snapshots. Even some lenses which you wouldn't
  suspect to be stereographic seem to be of that kind: my Canon Powershot
  G9X produces wide-angle RAW images which are best viewed as stereographic.
  You can also pass 'no' projection explicitly, or say explicitly that
  you want lux to figure out the projection automatically, by specifying

  --projection=auto

  lux now also supports *displaying* in different projections, see
  --target_projection. This feature comes in handy when doing stitches
  with lux, and it's also a lot of fun - you can use it to produce
  'little planet' views, and the automatic periodization of full spherical
  images is also fun if you zoom out far enough to see the source image
  repeated like a wallpaper pattern...

  lux now does support 'cubemaps' and 'facet maps' (documented in separate
  sections) which also 'count' as projections. These two 'projections' should
  only be used in ini files, and synoptic views gleaned from PTO files will be
  'facet maps' in lux lingo.

-h <real>, --hfov=<real>

  specification of the image's horizontal field of view as angle in degrees.
  This argument is mandatory (except for 'mosaic' projection), since there
  is no sensible default for it. For full spherical and cylindrical images,
  you'd pass -h 360. If the image has appropriate metadata, you don't
  need this argument unless you want to override the metadata.

-v <real>, --vfov=<real>

  specification of the panoramic image's vertical field of view as an angle.
  This is rarely needed unless you need to use -y as well. Keep in mind
  that you may pass viewing angles as you please - you're free to pass
  angles which have nothing to do with the image at hand: this is *not*
  an error. So passing 'wrong' hfov or vfov will result in 'unreal'
  results rather than causing an error. But most of the time, you'll
  want a vertical field of view which is derived from the horizontal
  field of view and the projection, which is what happens per default
  if you don't pass -v.

-x <real>, --horizontal_offset=<real>

  sets horizontal offset of the image data from the zero
  degree position of the full 360 degree sphere. in degrees.
  So, if you have a section of a spherical panorama 300 degrees wide
  which is taken from the center of a full spherical, you'd use -x 30
  to display the section as the full spherical would be displayed.
  This is similar to GPano's CroppedAreaLeftPixels, but uses degrees.
  Using degrees here makes the parameter usable for all projections,
  whereas using a pixel value is only possible for projections where
  a 'full extent' exists.

-y <real>, --vertical_offset=<real>

  ditto for vertical offset. This value is important for panoramas which
  don't have the horizon in the vertical center position. The automatism
  in lux assumes that's where the horizon is and sets the y value to half
  what's left after subtracting the vertical field of view from 360 degrees.
  But if the horizon is elsewhere, you must use an appropriate y value - or
  manually correct the horizon position using the H key. This only
  affects images where a misplaced horizon is possible: full sphericals,
  for example, have no 'spare angle' left to pass to -y unless you 'cheat'
  by passing a smaller vfov.
  This is similar to GPano's CroppedAreaTopPixels, but uses degrees.
  -y is good to show 'little planets'. If you have a full spherical, try
  passing -ps -h360 -v90 -y180, then go to the nadir (PgDown) and zoom out.
  Using degrees here makes the parameter usable for all projections,
  whereas using a pixel value is only possible for projections where
  a 'full extent' exists. Consider cylindrical panoramas: their 'full vertical
  extent' would be infinite. Using degrees avoids this problem.

--initial_yaw=<real>
--initial_pitch=<real>
--initial_roll=<real>

  set the initial orientation of the view. Note that there are no checks on
  these values: they may land you 'outside' the source image. Use with care
  when your content is not a full spherical. Pass the values in degrees.
  When pressing Return, this is the orientation you'll return to. This option
  combines with the effect of

--auto_position=<yes/no>

  when on (which is the default), lux tries to find a 'good' starting position:
  for example, with full sphericals, it will start so that the left margin
  of the image data coincides with the left margin of the view. When off,
  lux will start out with image center and view center coinciding. The
  three initial Euler angles will go 'on top'. So if, for example, your
  image is a full spherical with north in the center and you want to
  show the view to the west, you'd use

  --auto_position=off --initial_yaw=90

--target_projection={rectilinear|spherical|cylindric|fisheye|stereographic}

  Sets the projection *of the display window*. The default is 'rectilinear',
  an image as if you were taking the current scene with a rectilinear lens.
  I've added four other projections - not so much to be used for viewing
  images, but more to provide easy reprojection. Let's say you have a full
  spherical with a wrong horizon or a center you don't like (might be east
  instead of north). To reproject, first load the image into lux like this:

  lux -W -ps -h360 -H360 --target_projection=spherical --window_width=1000 \
     --window_height=500 --snapshot_magnification=6 my360.jpg

  Next, move the image around until your horizon and center are correct.
  Then do a snapshot. The snapshot will be a 6000X3000 full spherical
  with the corrected horizon and center. Why the window_width and
  window_height? To get the 2:1 aspect ratio for a full spherical.
  The snapshot magnification increases the snapshot from the window's
  measly 1000X500 to 6000X3000.

  You can of course simply play with this feature, one nice and probably
  unexpected thing you can do is zoom out to get a field of view greater
  than 360 degrees for target projections where this can be interpreted
  meaningfully, like spherical or cylindric: You'll see the image repeated
  to its 'periodized' form. Now start autopanning that and play with the
  cursor keys... :D

  Another thing this feature is good for is extracting stripe panoramas
  for prints and banners: typically you want to cover a wide horizontal
  field of view, which produces strong distortions near the edges or
  can't be done at all - unless you switch to cylindric or spherical
  target projection.

  There are situations where the single images take 'long' to compute
  (like, 100ms), especially when your source material is facet maps.
  You won't get fluid animations then, unless you degrade rendering
  quality a lot.

  target projection is an important datum for stitching jobs, and oftentimes
  you'll want a stitch to use, say, spherical projection. lux can use
  the information in a PTO file's 'p-line' to stitch to the specification
  you set in a stitcher like hugin (use, e.g. 'Shift+E'), in which case
  the target projection doesn't have to be specified on the command line.

-H <real>, --hfov_view=<real>

  horizontal field of view of the viewing *window*. Selecting a small
  value here zooms in, a large value zooms out. Default: 70 degrees.
  This only sets the initial value, you can zoom in/out later.

-W, --fullscreen=<yes/no>

  If you pass -W or --fullscreen=no, lux will start in a window (the default
  is to start in full-screen mode). If you don't specify --window_width and
  --window_height, you'll get 0.8 times full-screen initial size. But once
  you're running in a window, you can resize it any way you like. While resizing,
  the image may briefly flicker, show black areas or be slightly distorted, but
  it will adapt to a changed window size within a few frames. When the window size
  changes, the zoom factor is - roughly - held. Currently there is an issue if
  you start out with a very small window and enlarge it - if your view goes
  funny, just press F11 twice to go full-screen and back.

--window_width=<integer>
--window_height=<integer>

  Sets the size of the viewing window when fullscreen mode is off. Your
  window manager may interfere if these values are large and limit the window's
  size so that it 'fits in' with it's window frame and other GUI elements
  like a task bar. Keep this in mind when passing fixed window extents to
  create snapshots of a fixed aspect ratio. You want to avoid the window
  manager's interference. To reiterate: you control the aspect ratio of
  snapshots, stitches, fusions etc. by the shape of your window, and their
  size by a factor (snapshot_magnification) which is multiplied with the
  window's size.

--gui_extent=<integer>

  this parameter is given in screen pixels and fixes the size for the
  entire GUI bar. If not set, or if zero, or a negative value is passed,
  gui_extent is set to relate to the *height of the desktop*: If the desktop
  is 1080 pixels high, the GUI stripe will be 1920 pixels wide. This parameter
  is for users with extended desktops and allows them to 'shrink' the GUI bar
  to coincide, e.g., with the width of one of their screens. This argument
  sets the initial value, you can change the size by Ctrl+Mousewheel while
  the GUI bar shows.

--show_status_line=<yes/no>

  switches the status line on or off (default is 'on'). You can also toggle
  the status line on/off with the 'V' key.

--metadata_query=<string>

  Adds a metadata query key to the list of queried metadata for the status
  line. This is a vector field, so you can pass this option several times,
  but currently the number of queried metadata keys is limited to ten (0-9).
  An example: pass --metadata_query=Exif.Photo.DateTimeOriginal to obtain
  the original date and time. If you use --metadata_format="%n  %0"
  at the same time, you'll get the filename and date/time in the status
  line. For now, queries are for Exif tags only, and only for those
  understood by libexiv2, as listed in https://www.exiv2.org/tags.html
  The conversion to the displayed string is left to libexiv2's toString()
  function and can't be influenced via lux. If the specified key has no
  value assigned to it, it will be displayed as '---'.

--metadata_format=<string>

  Format string for metadata display in the status line. This is new and
  still experimental. Pass a format string where %n will be replaced by
  the current filename, and %0 to %9 will be replaced by the value gleaned
  from querying the corresponding metadata_query entry (see above).
  %h yields the image's hfov, %p its projection and %P the viewer's
  target projection.
  If you omit the format string, metadata specified with '--metadata_query=...'
  will still be displayed: the value will be prefixed by the key and a colon,
  and all specified keys will be displayed in numerical order.

  TODO: currently, if you have no 'metadata_query' argument, the status line
  won't show, even if it only has format arguments like %n which are known
  without a query. To work around this problem, pass an empty query, like
  --metadata_query=""

-A <real>, --autopan=<real>

  whether to 'start running'. lux has an 'autopan' mode, where the view pans
  over the image at a fixed rate (which you can modify with the A/O keys).
  If this flag isn't set, lux initially displays a still image and starts
  autopanning only if you press Space. Passing -A... starts lux in autopan
  mode. You can pass any float value. 0.05 produces what I think is a good
  compromise, panning towards the right, but any value will work. Note that
  passing 0 will not disable autopan, but start lux with autopan off.

--slide_interval=<real>

  sets the time (in seconds) from one slide to the next. Note that this
  value will persist; the slide interval can only be modified via the
  user interface, not by passing this option again later on.

--slideshow_on=<yes/no>

  starts lux in slideshow mode (yes) or not (no). Note that this value will
  persist; it can only be modified via the user interface, not by passing
  this option again later on.

  When the slideshow is on and you do not interact with the view for
  <slide_interval> seconds, the next image will be displayed.
  So if you do a multi-image invocation with -d 7 and just don't
  do anything, the next image will come on after 7 seconds. If you do
  interact, the 'slide show' stops and you have to manually 'tab'
  to the next image. Any interaction will do, even ones which don't
  have a visible effect (like a plain secondary click). Once you've
  'Tabbed' to proceed, slide show mode is on again. Note also that when
  slide show mode is on and there are no more images left to be
  displayed, lux will terminate.

-d <real>

  sets the slide show interval and starts in slideshow mode. So this is
  like passing --slideshow_on=yes --slide_interval=<real>. As the two
  long options above, this only has an effect at program startup.

--crossfade=<yes/no>

  If set to 'yes' (the default) lux will crossfade to the next image.

--crossfade_delta=<real>

  This factor affects the speed of the crossfade. For every crossfading
  step, this factor is added to a variable which is initially zero, and
  when it reaches one, the crossfade is complete.
  The default here is 0.05, which is brief, but noticeable with a 60Hz
  monitor.
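  With the default of 0.05, a crossfade takes 1 / 0.05 = 20 steps, which
  at 60 frames per second amounts to about a third of a second.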

-l, --process_linear=<yes/no>

  -l or --process_linear=no switches internal processing in linear RGB off.
  Internal linear processing is on per default, but it needs quite a bit of
  processing power: the internal calculations take the same time, no matter
  if they operate on sRGB or linear RGB data, but the transformation back to
  sRGB which has to be done for every frame when using linear RGB internally
  is time-consuming. Nevertheless it's the correct way to handle image
  data, and if you work on the sRGB data instead, you'll notice, for example,
  more pronounced changes in contrast and saturation when switching between
  fast and HQ mode, provided they use different interpolators.
  Switching internal linear processing off is an emergency measure to save
  processing time, but if you're merely having a look at stuff and don't expect
  to go 'deep' you may consider making it one of your 'standard' options because
  it reduces system load quite a bit. You definitely want it 'on' for HDR blending.

-L, --is_linear=<yes/no>

  In contrast to the previous flag, this one is used to tell lux the type of
  the image data in the source image. If you pass 'yes', you tell lux that
  the source data *are* in linear RGB. This flag will only have an effect if the
  image can be linear RGB or sRGB; some image types can only be either, in which
  case -L is simply ignored and the data are handled appropriately.
  So, when your panoramic image is, for example, in openEXR, this flag will be
  'yes' automatically. If this flag is set, this does *not* imply that the data
  are processed internally as linear RGB: if -l is set, incoming linear data
  are converted to sRGB when they are read from disk and won't be processed as
  linear RGB internally. This goes so far that if your source data are linear
  RGB and -l is set, taking a snapshot to EXR requires converting the (internal
  sRGB) data back yet again to linear RGB (for EXR output), which can take
  quite some time. You've been warned ;)
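
  A hedged example: assuming your TIFF is one of the image types which may
  hold either sRGB or linear RGB, you could announce linear input and keep
  the (default) linear internal processing explicit like this:

  lux --is_linear=yes --process_linear=yes linear_pano.tif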

--tonemap=<yes/no>

  If set to 'yes', this option applies a very simple global tonemapping
  operator to the output. The mapping function is:

  out = 318 * ( in / ( in + 255 ) )

  You might also consider this mere dynamic range compression.
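
  To see the compression at work: an input of 255 maps to
  318 * ( 255 / 510 ) = 159, and an input of 1023 maps to
  318 * ( 1023 / 1278 ), which is roughly 254.5 - just inside the
  'normal' range.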

  This option should be used in linear light (--process_linear=yes) and will
  force pixels up to ca. 1023 brightness into the 'normal' 0..255 range,
  with the most pronounced effect on the brightest pixels. This option is
  useful for viewing HDR input (like openEXR images) and facet maps with
  --blending=hdr. This is new and experimental; I'd like to introduce more
  tonemapping operators later on. lux now uses 'snap-to-fuse' by default
  and renders an exposure fusion in the background when the viewer is at
  rest. This is nicer than the simple tonemapping, but it does not happen
  instantly for every frame, and it takes quite some time to compute, which
  can be distracting, because the fused view replaces the 'fast' view when
  it's ready and it often looks quite different. 'Proper' tonemapping in
  lux is done by exposure fusion from exposure brackets, or by 'false
  brackets' generated from - preferably - HDR images in openEXR format,
  but this requires lengthy calculations and can't be done in real time.

--snap_to_hq=<yes/no>

  This sets the initial value of 'snap to hq'. The default is to use the
  'hq interpolator' to render single frames when the viewer is at rest,
  or even produce 'proper' stitches/exposure fusions (see the arguments
  below). The behaviour can be switched on/off with F12 or the GUI button
  labeled 'IDLE HQ', the command line argument only sets the initial state.

--snap_to_stitch=<yes/no>
--snap_to_fusion=<yes/no>

  When the viewer is at rest and 'idle hq' is on (which is the default),
  snap_to_... will cause rendering of properly stitched/fused output for
  facet maps. The default is 'yes' for both options. If they are set to
  'no', lux will still switch to the 'hq interpolator' when 'idle hq'
  is on, but it won't do 'proper' stitches/fusions.

--alpha=auto

  This is the default, if you don't pass any alpha option.
  If your image has a totally opaque alpha channel, lux will notice the fact
  after loading the image, and ignore the alpha channel as if you had invoked
  it with -a. The test costs some processing time, but saves much more later on.
  But even a single slightly transparent pixel is enough to fail this test.
  I had a few sphericals in TIFF done with hugin where I had used hugin's
  automatic cropping. To my surprise they turned out to have a few non-opaque
  pixels in one corner or other, failed the test and rendered slowly. Keep that
  in mind if you get stutter. With JPEGs this is, of course, not an issue.

--alpha=as-file

  provide an alpha channel if and only if the image has one.
  This is like the default, with the only difference that the default
  checks an image's alpha channel (if present) and ignores it if it is
  fully opaque.

-a, --alpha=no

  Unconditionally ignore the source image's alpha channel. This will
  uncover data in transparent areas of the image, because the alpha channel
  is simply ignored. It depends on the source image whether it contains
  intensity values in transparent areas or not.

--alpha=yes

  provide an alpha channel, even if the image has none.
  If the image has an alpha channel, it will be used. If it does not have
  one, a new, fully opaque alpha channel is created and attached to the image.
  Alpha processing will be enabled, with all the consequences: Rendering will
  be slower, EXR snapshots are possible, TIFF snapshots will store alpha data.
  For facet maps, you may want to use --alpha=yes even if your input images
  don't have an alpha channel: rendering with an alpha channel will blur
  the edges slightly, suppressing staircase artifacts. This comes at the
  cost of longer rendering times.
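
  For example, to render a facet map from JPEG source images (which carry
  no alpha channel) with alpha processing, for smoother edges:

  lux --alpha=yes facets.pto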

-f <int>, --fast_interpolator_degree=<int>

  degree of b-spline to use for the 'fast interpolator' which is used
  for animated sequences. Here, the quick frame succession will result in
  perceived motion blur anyway, so linear interpolation is perfectly
  adequate (degree 1). If this is too slow, the fastest method is
  nearest neighbour (degree 0), which may produce unpleasant quantization
  artifacts and aliasing at some resolutions. On my system I can run
  lux with -f2 without image degradation with most panoramas. If you have
  potent hardware or if you don't mind slight loss of sharpness due to
  'global scaling', you can try higher degrees as well.
  The default is degree 1, which uses bilinear interpolation. Note that
  this parameter will only have an effect if --build_pyramids is set to
  true, which is the default. If --build_pyramids is set to false, lux will
  use bilinear interpolation as its 'fast interpolator'. Note also that
  if you pass a value greater than 1, lux will build a 'dedicated interpolator'
  which needs a good deal of memory: about 12 bytes per pixel. Interpolation
  is only used for *magnifying* views - if your current view scales down,
  lux uses 'decimation' instead - a different process with different
  ('antialiasing') filters.

-q <int>, --quality_interpolator_degree=<int>

  degree for the interpolator used in still-image mode. Here we're not
  trying to provide frames as quickly as possible, but instead we want
  high quality interpolation. The default here is a cubic b-spline.
  This produces a good result with little room for improvement, yet
  it doesn't take too long to compute. Try using a higher-degree spline
  here to see if you can spot any difference.
  When passing the same value for both -f and -q, both roles will share a
  single interpolator, which reduces lux' memory use by
  50% and may be the only option if you are trying to view very large
  panoramas. And if -f 2 works well for you, the difference from a
  quadratic to a cubic b-spline isn't dramatic, so using -f 2 -q 2
  may well be a good compromise for you: it only needs one interpolator,
  and therefore loads more quickly. Again this parameter is ignored if
  you pass --build_pyramids=false. In that case, lux will use a quadratic
  b-spline without prefiltering, which will produce very slight blur
  but looks nicer than the bilinear version. And, as for the previous
  option, interpolation is only used for magnifying views.
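
  For example, to view a very large panorama with a single shared
  quadratic b-spline interpolator:

  lux -f2 -q2 very_large_pano.tif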

-s, --build_pyramids=<yes/no>

  If you pass -s or --build_pyramids=no, lux will not build the 'proper' image
  pyramids for rendering, but instead render magnifying views directly from
  the raw data and scaled-down views from a smaller image pyramid containing
  only levels 1..n, which saves a good deal of memory. With this option
  set to false, you'll get a decent compromise: good-quality interpolators,
  fast rendering, few drawbacks. I even consider making this the default.
  But the interpolation you get with magnifying views won't be as 'crisp'
  as you'd get from, say, a degree-2 b-spline, because the prefiltering
  is not happening. And you won't get 'area interpolation' for scaled-down
  views - this is currently reserved for the 'elaborate' pyramids/interpolators.
  What's a bit confusing is that, with build_pyramids set to 'no', you won't
  get elaborate *interpolators* either, even though, technically, these
  interpolators (used for magnifying views) are not part of the image
  pyramids. Maybe I'll have to think up a better name for this option...

--build_raw_pyramids=<yes/no>

  by passing 'no' here, you can squash memory use even further, because the
  1..n image pyramid mentioned above will also be omitted, and the least
  amount of memory is used (namely, just as much as the source image data).
  But you'll get (bad) aliasing for scaled-down views. This option is 'on'
  by default, to avoid the bad aliasing. But if you have a very large image
  to show, this may be the only way.
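
  A sketch for showing a huge image with minimal memory use, accepting
  aliasing in scaled-down views:

  lux -s --build_raw_pyramids=no huge_image.tif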

-S <real>, --pyramid_scaling_step=<real>

  'shrink factor' from one pyramid level to the next. This also determines
  how many levels there will be; if the value is small the number of levels
  will rise, possibly beyond a manageable measure. Typical values: 1.25 - 2,
  and lux silently enforces a minimum, to prevent the creation of very 'steep'
  pyramids which need a lot of memory with no discernible effect.
  The default here is 2: each pyramid level will be roughly half as wide
  and half as high as its predecessor. If you use, say, 1.41, each level
  will have roughly half as many pixels as its predecessor.
  With 'area decimation' for downscaling, which is now the default, the
  amount of smoothing adapts automatically to the scaling step, because
  this filter is adaptive. When using lux' 'classic' mode of downscaling
  with a b-spline reconstruction filter, you can vary the degree of
  smoothing - a smoothing level of 7 fits well with a scaling step of 2.
  Keep in mind that no scaling operation is 'perfect', especially not the
  methods lux uses, because they are chosen to be fast to compute rather
  than extremely precise. So if you choose smaller scaling steps than two,
  you'll get more downscaling operations, each degrading the image a bit, and
  when this cumulates you may get noticeable blur in heavily downscaled views.

-F <int>, --pyramid_smoothing_level=<int>

  smoothing level. lux now uses 'area decimation' as its default downscaling
  method; this will work with pyramid_scaling_step in the range of 1-2. To
  use this decimation filter, pass --pyramid_smoothing_level=-1. This is fast
  and the result looks good, so I decided to make it the default. Another
  good quality decimation filter is applied with --pyramid_smoothing_level=-2,
  this uses a binomial filter (1/4,1/2,1/4) for downscaling. lux' 'classic'
  method was using a b-spline reconstruction filter of large-ish degree
  without prefiltering. You can get this behaviour by passing positive
  values, which set the degree of the b-spline reconstruction filter.

  The remarks above about scaling steps less than two apply for this
  downscaling method as well, so only pick a scaling step other than two if
  you need to. When passing -2 here, your scaling step should not be too
  far off two. The classic downscaling method is using a b-spline
  reconstruction filter of the degree passed to this argument. lux' standard
  was 7, since a reconstruction filter for a heptic b-spline is close to
  a 'standard' Burt filter. Use a value below 7 for less smoothing (like, when
  you use a 'shrink factor' below 2). This is a matter of taste, really, and
  the differences are quite hard to tell. You want to use a level of at least 2,
  because levels 0 and 1 don't produce any smoothing at all, and level 0 does
  not even interpolate, so you get bad aliasing. Level 7 reconstruction sounds
  as if it takes lots of processing, but since we're only scaling, we can
  use vspline's grid_eval method which is a good deal faster than ordinary remaps.

  There are now additional downscaling filters available, which are activated
  by passing negative numbers and should be used with scaling steps near 2.0:

  - pass -3 to use the binomial kernel ( 1/16 * ( 1 , 4 , 6 , 4 , 1 ) )

  - pass -4 for an 'optimal Burt filter'.   This is taken from vigra,
    see the function vigra::initBurtFilter, online docu at
    https://ukoethe.github.io/vigra/doc-release/vigra/classvigra_1_1Kernel1D.html#a1406a301a1cc659b3098bbcc0a827228

  - pass -(4*N-1) with N >= 2, to use an FIR halfband filter with (4*N-1)
    taps. The filter is constructed using the method given in
    https://www.dsprelated.com/showarticle/1113.php
    It's a truncated sinc with a Hamming window. Typical values here would
    use small-ish N, the larger N becomes the less additional effect you get,
    so try -7 or -11 and only proceed further if you have good reason to do so.
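
  As an example, to build the pyramids with an 11-tap half-band filter
  at the (default) scaling step of 2:

  lux --pyramid_smoothing_level=-11 pano.tif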

  Especially using the half-band filters should produce near-optimal results,
  if my theoretical reasoning is correct: An optimal half-band filter completely
  removes the upper half of the spectrum. For use in lux, the filter is
  'piggybacked' on the b-spline evaluator, using a 'convolving basis functor',
  and the resulting hybrid evaluator yields a continuous signal which is
  equal to the signal you would obtain from first half-band-filtering the
  original data and then erecting the spline over the result: All arithmetic
  steps (the low-pass FIR, the b-spline prefilter and the evaluation) are
  independent. So the continuous signal we obtain is band-limited to the lower
  half of the spectrum. b-splines have the interesting property that if they
  lack frequencies in the upper half of the spectrum, they become near-immune
  to resampling, meaning that any unit-spaced sampling of the spline will produce
  values which can be used to erect a spline which will coincide with the
  previous one. This property is not 100%, but with rising spline degree it
  approaches 100%. This implies that resampling the spline at unit intervals
  will yield a set of values which are 'just as good' to represent the signal
  as any other unit-spaced sampling. If we pick the unit-spaced (or nearly so)
  sample locations so that they coincide with the intended knot points of the
  next pyramid level, we have (after decimation) an ordinarily band-limited
  discrete signal and can build a spline from it without aliasing or resampling
  artifacts, so we're one level further and can do the next iteration.

  A bit of technical background: all downscaling filters lux uses catch two
  birds with one stone: the low-pass filter and the subsampling are lumped
  together in one handy step using a grid evaluation on the 'current' level
  to get the 'next' level of the pyramid. This approach makes it possible to
  use a sampling grid which does not coincide with the sample positions of
  the 'current' level, as would be required for the 'normal' process of
  using a smoothing filter, followed by decimation. It's fast and efficient,
  and with area decimation the effect is roughly as good as a binomial filter
  followed by decimation by a factor of two. But the 'freedom from the grid'
  makes it possible to use a subsampling grid which *preserves the boundary
  conditions*: if the current pyramid level has, for example, periodic or
  reflective boundary conditions, the 'next' level will as well, and, when
  fed appropriately scaled and shifted coordinates (using a vspline::domain)
  it will behave as the 'current' level, only yielding smoothed values instead.
  It's clear that this 'boundary equivalence' would only be possible for
  a few rare exceptions of 'current' grids (periodic grids with even sample
  counts, mirror boundaries with 4n+1 samples) when using filter+decimation,
  and certainly not for the reflective boundary conditions which lux mainly
  uses. Most grids can't be decimated to produce a 'boundary-equivalent'
  down-scaled version. If you're interested, have a look at the decimation
  code; it's in pv_rendering_common.cc, find 'make_decimator'.

  On the downside, off-grid subsampling with a decimator isn't easily tackled
  mathematically - especially not with the 'area decimator', which does not
  have a fixed transfer function. Initially I thought this might be a problem,
  but I found the results satisfactory and could not detect any drawbacks.
  So I'd say: the proof is in the pudding. As mentioned above, the desired
  behaviour should be approached best using a half-band filter for
  downscaling. This takes more time to set up, but small half-band filters
  like 7-tap or 11-tap are not much slower than, say, a Burt filter.

--decimate_area=<yes/no>

  Set to 'yes', this activates 'area decimation' for animated sequences.
  Still images are *always* scaled down with area decimation and are not
  affected by this argument.

  While magnified views use an interpolator, scaled-down views rely on
  image pyramids and 'decimation', which can use quite different calculations
  from interpolation, trying to avoid issues like aliasing, moiré etc.
  The default for this option is 'no'. This is lux' 'classic' mode: from the
  image pyramids, the level which is closest in resolution to the intended
  display is picked and used with b-spline interpolation. If you pass 'yes'
  instead, lux will use 'area decimation' from the next-better pyramid
  level. 'Classic mode' with the default 'pyramid scaling step' of 2.0
  can produce quite visible 'jumps' in sharpness when switching from one
  pyramid level to the next. You can lessen the 'jumps' by using a smaller
  scaling step - the drawback is higher memory use and slower startup.
  To lessen the jumps I implemented 'area decimation' for lux, which uses a
  'scalable filter' and produces much less visible jumps when switching
  to another pyramid level. All of this is hard to see, but if you want
  to get a good look at the inner workings, try the 'blow up mode',
  which will not simply zoom in more, but magnifies the image without
  switching to a different pyramid level. To closely inspect the level
  switching and the decimation process, you may even want to increase the
  magnifying glass factor to, say, 100 instead of the default, 10. When
  decimation is done with a large half-band filter, the 'blow-up'
  view will show ringing artifacts due to the mismatch of screen resolution
  and pyramid level. This does not indicate an error! With the blow-up
  switched off, artifacts at that magnitude are not relevant and
  should be invisible.
  'area decimation' is slower than 'classic' mode using linear interpolation,
  but roughly as fast as using a quadratic spline - it has the same support
  as the quadratic spline, but the basis function is a bit quicker to compute.
  When area decimation is used, the pyramid level will be calculated by
  always using the next-better pyramid level and *adapting* the size of the
  filter to the current zoom factor, resulting in smoother transitions from
  level to level - 'classic' mode will switch based on the rounded scale, so
  it'll switch sooner, when it's more noticeable - but this also makes it
  faster, because it has to 'condense' a less-spread-out memory area.
  So the advice here is to use decimate_area=yes if the system can handle it,
  and fall back to not using it if the animation stutters - or to use
  moving_image_scaling to reduce load. Note that this option only sets
  the initial value, you can toggle the mode anytime at run time by
  pressing 'D'.

--squash=<integer>

  removes levels from the image pyramids. If the source image's resolution
  is too high, this argument can be used to remove levels from the pyramid,
  for example to reduce memory load. Cubemaps will also be affected by this
  value. Facet maps will only be affected if no per-facet values are passed
  with '--facet_squash=...'.
  Note that squashing totally removes higher-resolution data: the interpolator
  for magnifying views will have the same resolution as the pyramid's 'best'
  level.
  With this mechanism it's possible to process very large source images
  which otherwise would require too much memory: the squashing is done quite
  early after loading the image data from disk. The drawback is, of course,
  that the 'best' pyramid level will be some scaled-down version of the original
  image and will therefore lack some detail, but this may still be considered
  better than overly high memory consumption. It's especially useful for facet
  maps consisting of many images, which normally need a lot of memory. With
  squashing, you may be able to load a facet map which would otherwise exceed
  your system's capacity. With per-facet squashing, you may opt to only
  'squash' very high-res facets and leave others intact.
  Note that this option removes *the highest-resolution levels first*.
  Nowadays, many cameras and smartphones produce images with many megapixels
  more than the lens would justify, and 'squashing' them has no adverse
  effects. The process is similar to 'pixel binning', but due to lux'
  downscaling code, the process is more involved and not strictly 2:1.
  The downscaling is done with the chosen decimation method.
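
  A hedged example - assuming the integer gives the number of levels to
  remove - this would take one level off the pyramids of an oversized
  image:

  lux --squash=1 oversized.jpg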

-m <real>, --moving_image_scaling=<real>

  This value affects animated sequences - moving images - like pans or zooms.
  Animated sequences are computationally intensive, because lux calculates
  each frame 'from scratch'. Depending on the current pixel pipeline, this
  may exceed the host system's capacity, and the result is dropped frames,
  visible as 'stutter'. Just how much processing power is needed depends
  on many factors, but there is one common handle to lower processor load:
  the size of the frames. If you calculate small frames which show the same
  content, this takes - roughly - proportionally less time. To make them
  *look* roughly like the 'correctly sized' frames, you pass them to the GPU
  with the proviso that they should be enlarged to the desired size. This
  magnification is done entirely - and very efficiently - by the GPU, so
  you have more CPU cycles to deal with the demanding animation.
  moving_image_scaling is a factor which is applied to frame size to get
  this effect. To lower computational load, you use a factor less than one.
  If, on the other hand, you have processing power aplenty and want to get
  animation quality up, you can pass values above one: this will result in
  the rendering of oversized frames which are scaled down by the GPU,
  just as it can be done for still images (see the option below), which may
  benefit image quality (if you don't overdo it).
  The setting is affected at runtime by changing the 'animation quality'
  with the GUI (buttons AQ UP and AQ down), where the strength of the effect
  is given in percent, referring to the number of pixels - or area - whereas
  the moving_image_scaling factor is a multiplicative factor affecting image
  size. So 'animation quality' set to 25% is the same as moving_image_scaling
  of 0.5. Since 'automatic rendering quality' is now the default, you have to
  switch the automatic off to get a lasting effect with this argument.
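
  For example, to fix moving image scaling at half the linear frame size
  (the equivalent of 'animation quality' at 25%), switching the automatic
  off:

  lux --auto_quality=no -m0.5 pano.tif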

-M <real>, --still_image_scaling=<real>

  sets the still image scaling factor. When the image display is at rest,
  lux renders an image with its 'quality' interpolator. This image can be
  rendered *larger* than the screen area it will occupy ('supersampling'),
  and the GPU will compress it to fit into its designated screen area.
  If you're not overdoing it, this may give a crisper still image. The
  default here is 1, which does not magnify.
  So this factor works just as the one above: it modifies the size of the
  frame, not its visible content. Rendering larger frames for still images
  takes more time, but since the still image is only rendered once, the extra
  time makes little difference: the user won't notice if this single frame
  takes 40ms to render instead of 20ms. Try it out if you're not satisfied with
  the quality of your still images, but don't overdo it.

-I <real>, --magnifying_glass_factor=<real>

  sets the magnification when pressing 'I' or 'Shift+I'. 'I' uses a fixed
  zoom factor, but Shift+I keeps the currently active pyramid level or
  interpolator, allowing you to inspect in greater detail how the image is
  composed at the
  pixel level. The default value here is 10.0. You can use the 'magnifying
  glass' for example to see when the pyramid level is changed when zooming
  out, which is often hard to see without magnification. Once you have found
  the 'right spot' where a bit of zooming in or out affects the level switch,
  you can switch the magnifying glass off and inspect whether you can still
  perceive the switch.

-E <string>, --snapshot_prefix=<string>

  sets the prefix used for single images extracted by pressing 'e'. The
  default is to use the source image's filename suffixed with .lux.
  If you pass -E xyz the images will be named xyz1.jpg, xyz2.jpg ...
  Snapshots will be done using the high-quality interpolator.
  This option also affects stitches and image fusions etc.

--snapshot_basename=<string>

  This option forces the snapshot base name, and the resulting image will
  be named by combining this base name with the snapshot extension, with no
  intervening infixes.

-e <string>, --snapshot_extension=<string>

  Set a different snapshot extension/format. You can use any extension
  JPG, jpg, TIF, tif, PNG, png, EXR, exr. JPG and PNG snapshots will use 90%
  compression by default, and you can choose a different compression with
  a snapshot_compression argument (see below).
  Why only these formats? Because lux image output hasn't yet moved to
  OpenImageIO and is still using libvigraimpex.
  TIFF snapshots will be done with 16bit depth and contain an alpha channel
  if alpha processing is enabled.
  EXR snapshots will be accompanied by a .lux file with appropriate metadata
  to display the EXR data with correct geometry when you open this .lux
  file again with lux.

--snapshot_compression=<real>

  If the chosen snapshot format supports compression, you can pass the
  desired compression level with this argument. Compression is given as
  a percentage, so sensible values are up to 100.

-X <real>, --snapshot_magnification=<real>

  sets the snapshot magnification. Snapshots are taken with no magnification
  by default, unless -X is passed, which overrides the default. Note that
  this magnification produces an image which is like what you'd see with a
  window of the modified size, even if that is larger than your screen. It's
  not just a blown-up version of the current view, but instead calculated
  from scratch, using 'high quality' rendering.

  This parameter is needed to produce snapshots of a certain size: the shape
  of a 'normal' snapshot will always be the same as the shape of the current
  view: either the shape of your screen when you're in full-screen mode, or the
  shape of your display window. Suppose you want to stitch a full spherical
  panorama sized 6000X3000. You'd use a display window with a 2:1 aspect
  ratio and the appropriate snapshot magnification, like

  --fullscreen=false --window_width=1500 --window_height=750
  --snapshot_magnification=4

  Note that snapshots done with 'Shift+E' (so-called 'source-like' snapshots)
  take their metrics from the source image/facet or a PTO file's p-line
  instead, and such snapshots are not affected by snapshot_magnification;
  to change output magnification for such snapshots, use output_magnification
  (see below).

--output_magnification=<real>

  I document this option here, next to snapshot_magnification, because it
  has a very similar effect. It's a magnification factor applied to
  'source-like' snapshots, which are not affected by snapshot_magnification.
  If you do, e.g., a source-like snapshot of a panorama 1000X500 pixels
  large and pass --output_magnification=2, then the source-like
  snapshot will be 2000X1000 pixels large.

  If the output is cropped (due to a crop specification in a PTO file),
  the cropping is scaled proportionally, but the scaling is rounded to
  integer values, so proportionality may not be 100% perfect due to
  roundoff.

--snapshot_like_source=<yes/no>
--snapshot_facet=<integer>

  This option tells lux to take a snapshot with the same projection and
  aspect ratio *as the source image*, or one specific image in a facet
  map or cube map - which can be chosen by passing 'snapshot_facet'.
  If there is no snapshot_facet argument, the first facet is picked by
  default. Note that numbering is C-style and starts with zero.
  So, while 'normal' snapshots take their aspect ratio and base size
  from the current view, 'source-like' snapshots take it from a source
  image. And while 'normal' snapshots produce an image in the given
  target projection, 'source-like' snapshots use the source image's
  projection. 'source-like' snapshots are meant to produce images which
  might be used instead of a given source image, with all modifications
  applied by the viewer.

  Snapshot magnification is still applied, so you can easily produce
  scaled 'source-like' snapshots, for example when you produce snapshots
  for web export:

  lux --snapshot_magnification=.33 --snapshot_like_source \
      --snapshot_compression=60 some_image.jpg

  There is one important point here for 'source-like' snapshots,
  stitches or fusions done from PTO files: for such output, the 'p-line'
  in the PTO file defines the output's projection, size, field of view
  and cropping. With this mechanism it's easy to use lux as a stitcher
  if you have PTO input: either you load the PTO into lux and then press
  'Shift+E', or you batch the process by using --next_after_stitch=yes.
  So to stitch a panorama from a PTO automatically and without user
  intervention, you invoke lux like this:

  lux --next_after_stitch=yes --snapshot_like_source=yes pano.pto

  And to do an exposure fusion from a PTO, use

  lux --next_after_fusion=yes --snapshot_like_source=yes pano.pto

  snapshot_like_source is useful to 'imbue' an image with HDR information
  from an exposure bracket, used together with --blending=hdr. Another
  option used in this context will likely be --snapshot_extension=exr,
  and it works well with next_after_snapshot (see below), making it easy
  to process sets of exposure brackets in one go. If you have brackets in
  folders off pwd, with the registration in 'bracket.pto', you'd do something
  like this:

  lux --blending=hdr --snapshot_like_source --snapshot_facet=0 \
      --next_after_snapshot --snapshot_extension=exr \
      --snap_to_fusion=no */bracket.pto

  This will place an exr snapshot of each hdr-blended bracket next to the
  pto file defining it. Note the use of '--snap_to_fusion=no' in this
  example: it tells lux to produce uncompressed HDR output. With
  snap_to_fusion on (which is the default) you'd get exposure-fused output
  instead. While the output is created, you'll briefly see
  each image as it is processed. You might prefer not to 'imbue' the first
  facet: if your camera produces, say, the shortest exposure as the second
  shot of the bracket and you'd like that shot to be 'imbued', just add
  --snapshot_facet=1. It's wise to 'imbue' the 'darkest' shot: within its
  boundaries, it will have the most valid intensity values, because
  overexposure is least likely in the 'darkest' shot. The data it holds
  may be noisy when 'pulled up', but the noise will show only where the
  'brighter' facets don't provide usable content, which is only a small
  part of the image near the margin if the facets don't overlap perfectly.
  So you may get thin stripes with noisy data - rather than thin stripes
  with overexposed pixels which are definitely worse.
  Keep in mind that, when --snapshot_facet is not passed and the input
  is a PTO file, the output will match the PTO's p-line, and it will often
  be cropped. If you intend to use the fused brackets to, e.g., stitch
  a panorama, this may be a problem, because you'd prefer uncropped images
  with equal FOV. So to produce input for panoramas, you're probably better
  off with --snapshot_facet set explicitly to one of the source images
  in the bracket, never mind the slight artifacts near the image boundary.

--next_after_snapshot=<yes/no>
--next_after_stitch=<yes/no>
--next_after_fusion=<yes/no>

  This option will take a snapshot/stitch/exposure fusion of the current
  view and proceed to the next image - if there is one. Passed on the
  command line, it will affect all images which are passed to lux: they'll
  show briefly, the output is rendered, then the next image is treated in
  the same way...

  This seems like a strange option in a viewer, but if you want to use lux
  from, say, a script, to produce image files, this option is used to
  make lux process all images in turn with the given parameters, storing
  the results as images. You might even use this mechanism to create
  images which you display later on - for example, a set of cube faces
  for a cubemap, which you create before displaying the cubemap. You
  have control over name, shape and size of the snapshots, so you can set
  up quite elaborate sequences. One typical use is to have an ini file
  with a set of registered source images which you'd like to render into
  a panoramic image - xx.lux in this example, which holds facets for a
  full spherical, to be rendered as 6000X3000 openEXR file:

  lux --feathering=50 --target_projection=spherical --fullscreen=false \
     --window_width=1500 --window_height=750 --snapshot_magnification=4 \
     --hfov_view=360 --snapshot_prefix=pvstitch --snapshot_extension=exr \
     --next_after_snapshot xx.lux

  Again, note that when using this option with PTO files as input, the
  default is to render to the specification given in the PTO's p-line.
  Note that 'regular' snapshots include HDR-blended and deghosted images,
  which are done with specific interpolators, so use --next_after_snapshot.
  Exposure fusions and focus stacks are, technically, both exposure fusions,
  so use --next_after_fusion. Panoramas need --next_after_stitch.

-G, --grey_edge=<yes/no>

  enable or disable fade_to_grey
  Per default, single-image interpolators are set up to fade at the image
  edges. This does not cost extra processing (except for a little bit when
  the interpolators are first built) and reduces sawtooth artifacts along
  the image margins. Rarely, this is undesirable, for example if you look
  at very small images, where the fading margins become quite wide due to
  the large magnification needed to pull small images up to screen size.
  -G or --grey_edge=no will switch fade_to_grey off.

  Note that this is unconditionally switched off for cubemaps, and for
  facet maps without alpha processing.

-r <int>, --frame_rate_limit=<int>

  set a frame rate limit. This will only have an effect if
  synchronization with vsync is off (-u). Passing a value <= 0
  has the same effect as not using this flag. Setting the frame
  rate like this does not work for me; I get bad tearing or stutter.

-u, --use_vsync=<yes/no>

  switch synchronization with vsync on/off. -u is equivalent to
  --use_vsync=no. If a frame rate limit is set (-r), lux will use SFML's
  setFramerateLimit() to fix the frame rate, but if it isn't set, lux will
  go as fast as it can. If you want to use this 'fast as it can' mode, you
  should also set a fixed frame time budget, because auto-budgeting when
  running at full speed will make the budget go down to the actual speed,
  which in turn may raise global scaling, which speeds rendering up, which
  decreases the budget... you'd end up rendering very small frames very
  quickly which is probably not what you want.

-z <int>, --stop_after=<int>

  set a frame number limit. This will go to the next image after
  the specified number of frames has been rendered. This can be used
  for automated testing/benchmarking: I like using a 1000-frame pan
  over a full spherical to benchmark my code, like
  lux -ps -h360 -A.05 -z1000 spherical.tif

-P, --allow_pan_mode=<yes/no>

  Passing -P or --allow_pan_mode=no will start with pan mode processing
  disabled. The default is to allow pan mode
  processing, which uses less processing power when performing 'pure' pans
  on spherical and cylindrical images. At run-time, this flag can be toggled
  with F8. At times (like, for benchmarking) you may want lux to start with
  pan mode processing off, hence this flag. So with pan mode processing
  disabled, you can still use autopan, but it may be a bit slower.

-g, --auto_quality=<yes/no>

  Passing -g or --auto_quality=yes sets moving image scaling to 'automatic'.
  lux will try to adapt to the system load and lower or raise the moving image
  scaling factor appropriately, starting out with a scaling factor of 1.0.
  This may or may not work well; finding the right heuristics was based on
  trial and error, and the system may overcompensate or not react quickly
  enough. If you rarely have issues with dropped frames, you're probably
  better off not using -g and manually adapting this scaling factor
  (by pressing M/Shift+M or via the GUI) if needed - or passing a fixed
  scaling value with -m/--moving_image_scaling at program start.

-b <real>, --budget=<real>

  This flag only has an effect when -g is set as well. It sets the 'frame
  rendering time budget' (in msec). When this flag is not set, lux
  automatically settles on a budget which is just some 20% under the GPU frame
  time (so, if your system is running 50fps, GPU frame time is 20msec and the
  budget will be fixed near 16msec). If you pass a smaller value here, you can
  force lux to render frames within the budgeted time, if necessary using
  'global scaling' to sacrifice image quality for speed. This will only work
  if the global scaling factor is not fixed with the -m flag. Passing a value
  larger than the GPU frame time will not have an effect unless the actual
  frame rendering time exceeds the GPU frame time. Then your display will
  start stuttering.
  Using -b with small values can be helpful to force lux' resource use down,
  but most of the time you'll want lux to use its auto_budget mode. If you go
  too low here, you'll get blurred and unstable animations.
  You also want to use this flag if you're running lux in full-throttle mode (-u).
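
  A hedged example, running full-throttle with a fixed budget of 16 msec
  (roughly what auto-budgeting would settle on at 50fps):

  lux -g -u -b16 pano.tif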

-N <real>, --snappiness=<real>

  'snappiness' steers the overall responsiveness of the UI. Raising it beyond
  its default of 0.005 will make lux respond more strongly to user input. At
  run-time, you can use the X key to the same effect. This is currently limited
  to interaction affecting the size, shape and orientation of the view, not
  its brightness or hue.

-R, --reverse_drag=<yes/no>

  inverts the effect of click+drag with the primary mouse button

-Z, --reverse_secondary_drag=<yes/no>

  inverts the effect of click+drag with the secondary mouse button

-n, --suppress_display=<yes/no>

  -n suppresses the display of frames. This is used for benchmarking, best
  combined with -m1, -u and -z. If you're running lengthy batch jobs - like
  stitching all PTOs in a folder - the easiest way to avoid seeing what lux is
  doing is to start it in window mode (-W) and then minimize it. Alternatively,
  your OS may offer an option to start lux readily minimized.
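
  For example, a display-less benchmark over 1000 frames at full speed
  with fixed scaling might look like this:

  lux -n -m1 -u -z1000 spherical.tif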

-i [5:4:3:2:x:f:p], --isa=<string>

  This flag is only useful on intel/AMD builds at the moment, where lux can
  support several ISAs in one binary. Use this option to force use of a
  specific ISA. 5 is for AVX512f, 2 is for AVX2, x for AVX, 4 is for SSE4.2,
  3 is for SSSE3 and f for fallback, which is the same as p for 'plain'.
  It's not guaranteed that the ISA will actually be supported by a given
  build - lux will terminate if an ISA is specified which it does not have
  built-in.

  When -i is not used, lux will try and figure out which ISA is best for the
  CPU at hand, but this requires a build which can do CPU detection - not all
  builds do. If you know what ISA your current CPU uses and lux fails to
  detect it, you can use -i to specify the correct ISA.

  If you force use of an ISA which your CPU can't handle, lux will crash with
  an illegal instruction. The default here is --isa=auto.

--clear

  when encountered, this option has the same effect as setting every
  option to its default value. This will *not* discard the 'persistent'
  arguments from the initial command line (unless it's issued in the
  initial command line, where it would cancel all preceding arguments).
  It's typically used in ini files standing in for image files, to get
  a clean slate without having to specify overrides for every possible
  option that may or may not be active.

There are more options, but they pertain to synoptic displays of several images, and are not usually passed on the command line, but instead occur in lux files, or are inferred from PTO files.

The next two options introduce the notion of 'lux ini files', files which contain key-value pairs where the key is a long option name and the value its intended value. These files can be used in two ways: when lux is invoked, they serve as a source for (additional) arguments which are processed as if they had been written out like ordinary command line arguments - except for the '--' used on the command line to introduce them. On top of that, ini files which contain an 'image' option can be used instead of image files, with the result that the specified image is displayed after applying all other arguments in the ini file. You can use the --image=... syntax explained below, or simply pass 'image-like ini files' as trailing arguments on the command line, like any other image file:

--image=<filename>

  This option adds an image file to the list of images to be displayed,
  just as if it had been specified as a trailing argument on the command line.

  Note that .lux files may be passed *instead of image files*. The effect
  is that the lux file is processed when 'its time comes' (it may be queued,
  like any other file specified with --image or as trailing parameter).
  This mechanism is used to 'bundle' an image with arguments intended
  specifically for it. The non-image 'code' in the lux file can set up
  the arguments as it sees fit, and the image will be displayed
  as intended. After display, all the changes to the arguments are undone
  (a fresh copy of the initial state right after lux' invocation is copied
  to 'state'), and the lux file which was used 'instead of an image' passes
  through the usual queues like any other image, so that when it's viewed
  again, it will again load its specific arguments.

  So you can, for example, have a cubemap described by a lux file, and add
  additional arguments in that file which you want to use only for this
  cubemap. Once the next image - or lux file - loads, these additional
  arguments will be 'forgotten'.
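
  As a sketch - the filename and values are made up, but all keys are
  documented in this README - such a lux file might contain:

  # beach.lux - bundles arguments with the image they are meant for
  image=beach.jpg
  autopan=0.05
  process_linear=no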

-c <filename>, --ini_file=<filename>

  specifies an initialization file, a file which simply contains lines with
  name=value, where 'name' is the name of a long option, and 'value' its
  intended value. Apart from that, any line starting with # or white space
  is treated as a comment, allowing for commented parameter sets. Passed
  like this, the ini file's content is parsed as soon as the option is
  processed, just as if its content had been passed as arguments instead
  of the ini file. ini files can invoke other ini files: just add a line like

  ini_file=another.lux

  And "another.lux" is processed *as soon as the parser encounters it*.
  Note that, like in long command line arguments, white space is not allowed
  before the = separating argument name and value. White space after the
  separator becomes *part of the value*, so if you pass, e.g., file names, make
  sure there is no intervening space.
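
  To illustrate the pitfall with a made-up option assignment:

  snapshot_prefix=pano_
  snapshot_prefix =pano_
  snapshot_prefix= pano_

  The first line is fine, the second has illegal white space before the
  '=', and in the third the value becomes ' pano_', with a leading space.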

  reading ini files is recursive: if the ini file itself has an ini_file=...
  entry, this entry is processed next. The capacity for recursion is due
  to the entry for 'read_ini' in optarg, not due to some 'special magic'.
  There is no artificial limit to recursion depth and no protection against
  infinite loops due to recursion. Use with caution.

  I now require using the .lux file extension. Please note that lux files are
  not quite the same as 'common' ini files, even though the syntax is very
  similar: lux ignores 'section' specifiers (the lines in 'normal'
  ini files containing section headings enclosed in brackets).

  There is one 'special' lux ini file: the file named .lux.ini in your home
  folder. This file is always read - if present - and processed as if it had
  been passed with -c.
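
  For example, a minimal .lux.ini with personal defaults might read (the
  choices are merely illustrative):

  decimate_area=yes
  process_linear=no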

  Note that there is a difference between lux files passed 'instead of images'
  using --image=... and ini files passed as such using --ini_file=...
  The first case ignores all image specifications except the first one,
  and the ini file is queued and replayed like any image and *only evaluated
  once it's loaded 'as an image'*.
  If you use --ini_file=..., this simply reads more arguments from the ini
  file, just as if you'd written them out on the command line instead, and
  the options in the ini file are processed *as soon as the parser sees them*.

There's another option in this context:

--path=<pathname>

  This option introduces a path which is prepended to image filenames
  (and ini filenames used as images) if the filename does *not contain
  path separators*. This option is automatically set when an ini file
  is *read* by lux to contain the path *of the ini file*. Please see the
  section 'Acceptable Input' above.

User Interface

The UI is operated mainly via the keyboard and the mouse. This list gives the commands roughly in order of decreasing importance. Many of the commands can also be effected via the GUI. The GUI is also the only way to see numerical values for some settings, like the zoom level or the brightness, and the GUI is also the only way to enter numerical values for such settings. Some numerical fields are editable. If you set or unset options, you need to click one of the 'Apply' buttons at the bottom of the panel. You can choose to apply the option(s) to the current view or to the entire session.

lux will also display a status line while it loads images, produces interpolators or has snapshots/stitches/exposure fusions running in the background. You can switch the status line on/off with the --show_status_line command line argument and toggle its visibility with the 'V' key.

Let's start with mouse commands. The majority of mouse commands in lux are click-and-drag gestures: the gesture is initiated by depressing a mouse key and ended by releasing it. During the gesture, lux derives intensity and direction from the displacement from the initial click position. This direction and intensity drive the effect. This sounds complicated, but it's really quite simple. Let's take the direction of your view as an example. Primary-click anywhere, hold the click, then move the mouse a bit to the right and hold it there. The view will keep moving until you move the mouse back to the initial click position or until you release the mouse button. This feels strange to people who are used to click-and-drag gestures where the view acts as if it were glued to the mouse pointer, but it's the only way to effect long sweeps spanning more than one screenful. It's also much easier to just hold the mouse at a fixed position than to drag it at a constant speed, so the moves become smoother. To express it scientifically: the click-drag gesture modulates the effect's first derivative. You're not directly affecting, say, the gaze direction, but its change. This is inspired by 'QTVR mode' in 360cities.

To differentiate between a short click-and-drag and a simple click, lux checks the time difference between button depression and release and the displacement which occurred. If the time difference is less than 300 msec and the displacement is deemed negligibly small, lux assumes a simple click. Looking at the time difference is a recent introduction and helps with cases where the user wants to produce a delicate effect. The time difference is checked first, so as soon as the click is held longer than 300 msec the gesture will be considered a click-and-drag. If the time difference is shorter than 300 msec, a sufficiently large displacement will also be interpreted as a click-and-drag. Another recent addition is the 'slap' gesture: a primary mouse button click while the mouse is in horizontal motion. This will start/affect the automatic pan.

So here is a list of which mouse gestures lux understands:

primary mouse button click:

  With lux 1.1.6, there is a new gesture, which I call the
  'slap' gesture. It's a primary mouse button click executed
  while the mouse is in horizontal movement. The click has
  to be relatively short: the gesture is only recognized if
  the depression and release of the mouse button are no more
  than 300 msec apart. If the click is longer, it will be
  interpreted as a click-drag instead, and if the mouse is
  not in horizontal motion, it will be interpreted as an
  'ordinary' single-click.
  The new gesture modulates autopan to intensify in the same
  direction that a click-drag would go. This may cumulate, but if
  the direction is reversed, it cancels the opposite spin first.

  An ordinary primary-button single-click halts the autopan, and
  if there is no active autopan, it has no effect beyond marking an
  interaction with the viewer, which halts an active slideshow.
  Up to lux 1.1.6, a primary-button single-click always toggled
  autopan, but I felt this resulted in the autopan going
  off unexpectedly as a result of a sloppy click, which can be
  annoying and requires the user to remember that another click
  will stop the autopan. With the new gesture, which is harder
  to execute inadvertently, I feel the UI is less prone to
  produce unwanted effects.

secondary mouse button click:

  'focus here': move the clicked image point to the window's
  center. This is also new with lux 1.1.6 and became possible
  with the focused-zoom code. The effect can be slightly
  confusing, especially when it happens without intention,
  but it's a nice feature, playing well with the center-focused
  gestures (vertical secondary-button click-drag for zoom, and
  the key-mediated zoom alterations with +/-, 2, 3)

middle mouse button click:

  show the next image in line if there is one. This is the
  same effect as the TAB key.

primary mouse button click+drag:

  modify yaw and pitch by distance from click point. This is the
  central gesture to navigate in a view with the mouse. It's
  inspired by 360cities' QTVR mode and allows you to keep the movement
  of the virtual camera steady over an elongated time period,
  which is quite impossible with the 'standard' click-drag gesture
  used by many image viewers, which acts as if the mouse pointer
  were 'glued to the view': that gesture will move the view
  at the same pace as you push around the mouse, which is hard to
  do at a steady pace and usually ends when you run out of table
  space. lux' primary-button click-drag modulates the intensity
  of the movement with the distance of the mouse position from
  the click point, and the direction by the direction of the
  drag. The movement stops as soon as you release the mouse button.
  The initial position of the click has no specific meaning for
  this gesture; you can start anywhere you have enough table
  space to displace the mouse enough to produce the effect. Note
  how this gesture can be misinterpreted as a 'slap' gesture
  (see 'primary mouse button click' for that) if the mouse button
  is released very quickly.

primary mouse button click+drag with SHIFT:

  vertical primary-click-drag rotates the virtual camera

primary mouse button click+drag with CTRL:

  raise/lower horizon (for images supporting this operation)
  this is the same effect as the H/Shift+H keys.

secondary mouse button click+drag:

  vertical: zoom, or optionally a 'focused' zoom; up zooms in,
  down zooms out. If scvd_focused_zoom is set true, this zoom
  is 'focused' at the click position and will keep the content
  at this position (roughly) steady - the default is to use a
  center-focused zoom like +/- or 2/3

  horizontal: modify brightness; right brightens, left darkens

mouse wheel: zoom, or a 'focused' zoom if scw_focused_zoom is set.
  The default here is to use the focused zoom, keeping the content
  at the mouse position where the roll occurred (roughly) steady.

The 'focused zoom' is an approximation - near the nadir and zenith it may behave 'funny' trying to keep the vertical, which may move the content in unexpected ways. When navigating near the 'poles', you may want to switch off the lock by pressing F2. You'll lose the vertical but you can regain it anytime by pressing 'J', and once you're done navigating near the poles you can press F2 again to re-lock the vertical. To avoid problems with the focused zoom behaving 'funny' you can also use the 'click-to-center' feature to center the view at some point of interest, and then use an unfocused zoom with the +/- or 2/3 keys.

lux has some 'game genes', and with this heritage comes a large set of keyboard commands. The keyboard commands come in two 'flavours': some keys simply produce a momentary action, like switching something on or off. I refer to these actions as 'acute', setting them apart from keys which are held to produce an effect: I call such actions 'chronic'. If you merely press and release a 'chronic' key, you'll see some equivalent of a small leap. If you hold an 'acute' key pressed down, your keyboard driver will eventually start auto-repeating the key, producing the same effect as if you were repeatedly pressing the key. Most of the time the 'nature' of a key should be obvious from its function. The distinction between acute and chronic key commands is like the distinction between a mouse button click and a click-and-drag. If you look into the source code, user interaction is mapped to simple short functions starting with 'on' or 'do'. The 'on' functions stand for acute effects, and the 'do' functions for chronic ones.

The keyboard is scrutinized very closely, and you may combine several keys - how many you can combine depends on your keyboard and the driver. Common combinations are, e.g., yaw+pitch for a diagonal movement.

Quite a few keys which trigger a directed action use the same key combined with Shift to reverse the effect. This makes it easier to remember the right key, and you can hold the key depressed and apply Shift on top to reverse the ongoing effect.

If you get confused about mapping of keys to directions, the rule of thumb is that lux tries to provide an interface which simulates operating a (virtual) camera. So 'down' in lux means 'point the lens of the virtual camera downwards', which makes the things seen on screen move upwards on the screen. It is the same confusion as the meaning of 'scroll up' or 'scroll down'.

Before launching into the keyboard commands, please note that a lot of stuff which was only available via keyboard commands can now be affected with the GUI - there just isn't any documentation about this in this README. But the keyboard commands work just the same as before the introduction of the new GUI.

Here's a list of keyboard commands, roughly in decreasing order of importance:

Escape  close all GUI panels. If no panels are open, end the program.

Tab    proceed to next image

Shift+Tab  go to previous image

F11 or B  toggle full screen/windowed display
          Mac users, please use the window's control instead!

Return  start scene afresh (from last captured bias with 'sane' parameters)

F      open a file selection dialog to select one or several images to be
       displayed next. If the queue already holds images, the new selection
       will push the queued images further back: they will show after the
       new selection.

1      '1:1' - zoom to (roughly) 100%, so that one source image pixel
       coincides with one screen pixel. The match is made looking at
       the image center; due to geometrical distortions of the various
       transformations, the 1:1 relation will not hold everywhere.

2      double zoom factor

3      halve zoom - down to a sensible minimum

4      jump clockwise by one full quadrant horizontally (90 degrees)

T      jump clockwise by 75% of horizontal field of view - anticlockwise
       with Shift+T

Space  stop/start automatic panning

O     (letter o): reverse automatic panning direction

+, Z   zoom in. Note that the plus/minus keys may not be recognized on
       every keyboard, hence the alternative Z and Shift+Z keys. This
       zoom is focused on the view's center, like what you'd get when
       pressing the zoom button on a camera. Some mouse-mediated zooms
       are 'focused' at the mouse position instead. Both the 'normal'
       '+' key and the one on the Num Pad should work.

-, Shift+Z   zoom out

Left   pan virtual camera to the left (decrease yaw angle)

Right  pan virtual camera to the right (increase yaw angle)

Up     direct virtual camera further up (increase pitch angle)

Down   direct virtual camera further down (decrease pitch angle)

R      roll virtual camera clockwise

Shift+R  roll virtual camera anticlockwise

PgUp   go to zenith (the point straight above your viewpoint). This will
       land you in a 'black area' if your image does not extend to the
       zenith; it's mainly useful for full spherical panoramas.
       If the image is in 'mosaic' projection, this key moves the view up
       some way. See 'comic book mode'.

PgDown  go to nadir (the point straight below your viewpoint). This will
        land you in a 'black area' if your image does not extend to the
        nadir; it's mainly useful for full spherical panoramas.
        If the image is in 'mosaic' projection, this key moves the view
        down some way. See 'comic book mode'.

L      'level'. This sets the view's pitch and roll to zero, landing you
       on the horizon with a level camera. Of course, this orientation
       depends on what lux gleans from your input: if the metadata or the
       invocation is not correct, or you have reset the 'bias' to a wrong
       orientation, lux has no way of finding the level position. Also
       see F10.

J      sets camera roll to zero. This keeps the pitch but discards
       any changes the user has made to camera roll.

Shift+J resets camera roll to the value which it 'should have'
        given the user's previous interaction with the roll component.
        This may seem strange, but at times the correct roll
        can get lost and Shift+J 'sanitizes' it.

E      captures the current view to an image file. The snapshots will be
       placed in the same folder and overwrite existing snapshots with
       the same name. Snapshots done with 'E' (for 'exposure', or 'export')
       will show the same aspect ratio and part of the source image as the
       current on-screen view - if the viewer shows a window, the shape of
       the window will be the shape of the snapshot.
       Note that there is now an entire panel (the 'Export' panel) devoted
       to making snapshots, and you can usually get away without having to
       pass command line arguments, even though the following text mentions
       them.
       Per default, the snapshot will have as many pixels as the on-screen
       view, but you can apply a size factor to produce proportionally
       smaller or larger output (use --snapshot_magnification=...)
       lux' snapshot capabilities are quite sophisticated, please refer
       to the section discussing command line parameters for snapshots,
       which start with 'snapshot_'.
       Snapshots are produced by a dedicated thread (one per snapshot),
       with the rendering code using fewer threads while the snapshot is
       under way. This reduces the snapshot's use of CPU bandwidth, so
       its rendition doesn't produce too noticeable a performance drop
       when animated sequences are shown while it computes in the
       background.
       The downside is that if you terminate lux just after having launched
       snapshots, program termination will be delayed until all snapshots
       are ready and saved to disk. 'tabbing' to the next queued image is
       also delayed until all snapshots from the current image are ready.
       The E key is used for all types of snapshot and will typically produce
       an output image which looks 'similar' to the view you get when the
       viewer is at rest. Up to 1.0.9 there were dedicated keys for
       snapshots of facet maps with stitched or exposure-fused content
       (P, U), which are now removed - lux now looks at the blending
       mode and does the snapshot accordingly, which is more intuitive.
       For 'normal' single-image views, snapshots simply contain what the
       viewer shows when it's at rest and snap_to_hq is enabled. With
       snap_to_hq disabled, snapshots are still made the same way, only
       the on-screen view is affected by the setting.
       For facet maps, the 'snapshot rules' are more complex. Facet maps
       are processed according to the 'blending mode' and affected by
       'snap_to_stitch' and 'snap_to_fusion'. Here are the rules, ordered
       by blending mode:

       - ranked blending: (used for panoramas)

         If snap_to_stitch is true, output will be blended with B&A image
         splining, otherwise 'fast blending' will be used. The output will
         look like the on-screen display when the viewer is at rest with
         snap_to_hq enabled.

       - hdr blending: (used for exposure brackets and focus stacks)

         If snap_to_fusion is true, output will be an exposure fusion
         or focus stack (which one depends on exposure_weight and
         contrast_weight).

       - quorate blending: (used for deghosting)

         the output will be a deghosted synopsis, and not affected by
         the snap_to_... parameters

       Here's another way to look at the rules: suppose you have
       snap_to_hq on, which is the default. snapshots will contain just
       the same content as what you see when the viewer is at rest,
       because snap_to_stitch and snap_to_fusion affect the still image
       and the snapshot in the same way. If you switch snap_to_hq off,
       the viewer will 'stick to' the image you get when the viewer is
       'moving', rather than showing a dedicated still image. The snapshots
       will continue to use the same process, no matter if snap_to_hq is
       on or not. So snap_to_hq only affects what you see on the screen,
       whereas snap_to_stitch and snap_to_fusion affect processing for
       snapshots and dedicated still images alike.
       How about HDR? If you select an output format which supports it
       (like openEXR), all output will contain the full dynamic range
       that the chosen process can produce from the input, irrespective
       of what you may see on-screen. The dynamic range of snapshots
       will only be cut off for LDR output. This may produce surprising
       results: you may increase brightness in the viewer to a point where
       it shows pure white and then take a snapshot to an HDR format. If
       you proceed to view this output, it will also show entirely white,
       but if you lower the brightness, you will see that you still have
       the entire content - only the middle grey of the HDR image was set
       so high that it 'looked' white when first loaded.
       When exporting HDR content to an LDR format like JPEG, the dynamic
       range will be cut off at the same maximum brightness that the viewer
       can display. If you want to retain the entire dynamic range, you need
       to do the same thing you need to do when taking a photograph: make
       sure that no part of the image is overexposed by reducing the
       brightness. The resulting snapshot may look too dark when displayed,
       but this can be remedied by increasing brightness. Depending on the
       image format, you may have quantization errors in the dark areas
       which produce banding when the view is brightened. To express it
       differently: if your view shows white and you take a snapshot to an
       LDR format, you won't be able to recover the content - lowering
       brightness will only produce uniform grey.
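
       As a worked example, a snapshot-friendly invocation might look
       like this (just a sketch - the file name is hypothetical):

       lux --snapshot_prefix=pano --snapshot_magnification=2 my.pto

       Pressing E then writes blended snapshots at twice the size of
       the on-screen view, with file names starting with 'pano'.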

  E with Shift:

       Shift+E produces a 'source-like' snapshot. Such snapshots are images
       with the metrics and projection of the *source image* instead of the
       current view's. For facet maps, there are, again, special rules:
       If the input is a PTO file, lux will determine the output's metrics
       from the PTO file's p-line. The p-line contains what was specified
       by the program which made the PTO file (like hugin): the output
       projection, size and cropping. This is to make lux usable as a
       viewer and stitcher for PTO files.
       If input is a lux 'ini' file, there is no p-line, and the metrics
       will be derived from one of the facets instead: the first facet by
       default, or any other facet given with --snapshot_facet=...
       So these first two rules determine the metrics of the snapshot.
       What the viewer shows is irrelevant for a source-like snapshot's
       metrics.
       The *content* of the snapshot depends on the blending mode, and the
       rules are just the same as for 'ordinary' snapshots, depending on
       the blending mode and the snap_to... settings, please see above.
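
       For example, to base source-like snapshots on a facet other than
       the first one (a sketch - the file name is hypothetical):

       lux --snapshot_facet=1 bracket.lux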

Y       If the image is in 'mosaic' or 'rectilinear' projection, this key
        scales the image to fit the view vertically and centers it.
        Also see 'comic book mode'

Shift+Y If the image is in 'mosaic' or 'rectilinear' projection, this key
        scales the image to fit the view horizontally and centers it.
        Also see 'comic book mode'.

Home   If the image is in 'mosaic' projection, this key scales the image to
       fit the view horizontally and aligns its top margin with the view's
       top margin. Also see 'comic book mode'

End    If the image is in 'mosaic' projection, this key scales the image to
       fit the view horizontally and aligns its bottom margin with the view's
       bottom margin. Also see 'comic book mode'

H      go 'higher'. This is to help with cropped panoramic images where
       the horizon is not in the vertical center. The same can be achieved
       by passing the right -y value in the first place, but if that wasn't
       done, this can fix it. Mainly useful for stripe panoramas. Best to
       use a spherical and have correct GPano metadata, then the horizon
       will be just right. This key will only have an effect for some
       projections and cropped images.

Shift+H  opposite effect of H

V      toggle status line visibility. The status line can be annoying,
       especially during slide-shows. Pressing V will switch it off until
       it's switched on again.

G      toggle rendering quality control between automatic and manual. When in
       manual mode, you can increase global scaling with the M/Shift+M keys
       or by modifying 'animation quality' with the GUI, but in automatic mode
       lux will try and find a good value automatically. Automatic rendering
       quality is now the default.

M      reduce rendering quality. This does not change the field of view,
       but renders to a smaller frame which is magnified for display,
       which lowers CPU use at the expense of image quality; it's the
       best way I found to strike this compromise. In 'auto quality' mode
       this value is adapted automatically, so pressing M in auto_quality
       mode will not have a lasting effect.
       Note that this only affects the image generated with the 'fast mode'
       interpolator, which is normally used only for animated sequences,
       but see 'F12'.
       Reducing rendering quality is especially useful for complex synoptic
       views: rendering animated sequences of such views, like pans or zooms,
       will only be possible at low frame rates unless the rendering quality
       is lowered. Rendering quality as low as 1% will still be sufficient
       to convey an idea about what is viewed - it keeps you oriented. When
       the viewer comes to rest, the 'still image' mode kicks in and the
       view shows in full resolution, but this may take some time - even
       several seconds for complex multi-faceted views. The slowest renditions
       are from panoramas with stacks, where the stacks have to be blended
       first and the results stitched together.

Shift+M  increase rendering quality. Opposite of above.

D      toggle downscaling mode between 'classic' b-spline evaluation of
       the pyramid level nearest in scale, and 'area decimation', which
       uses an area filter with a window size of up to 2 on the nearest
       pyramid level below the chosen scale. There is also a command line
       argument for this: set --decimate_area=yes to get 'area decimation'.
       This mode of display is typically a bit slower than the 'classic'
       mode, but it tends to produce even smoother zooms whereas in
       'classic' mode the switch from one pyramid level to the next may
       be just about noticeable.

I      toggle use of 'magnifying glass'. When active, the center of the
       screen is displayed magnified 10X. When pressed again, the
       magnification is lowered to 1/10. The factor 10 can be changed
       by passing a different magnification with -I on the command line.

Shift + I toggle sensor magnification. This is to show what's happening
          at the pixel level, not to simply zoom in by a set factor. It
          uses the same factor as above. With this magnification, the
          interpolator stays the same. You can use this to actually see
          how the change from one image pyramid level to the next affects
          sharpness at the pixel level when you zoom out.

9     rotate anticlockwise by 90 degrees. While you can serve a display in
      portrait orientation with frames rotated like this, the controls
      won't rotate as well: the arrow keys for example will not work as
      you'd want them to work for a portrait screen. If you want to use
      a display in portrait orientation, it's a better idea to change
      your system's settings accordingly and then launch lux, which will
      make the controls agree with the orientation. You can, of course,
      'adopt' the rotated view as your new 'bias' by pressing F10.
      Note that lux looks at your image's metadata to figure out if it
      is rotated in some way and loads the image to memory so that it
      is displayed correctly. This is helpful when showing JPEGs straight
      from the camera. The '9' action is more of a quick helper if the
      correct EXIF information is missing; a proper EXIF orientation tag
      is definitely much better. You can 'fix' the rotated view by pressing
      F10, which will make the controls work 'right' again.

S     shift interpolator spline up (results in softening).
      Shifting is a technique to use the coefficients of a b-spline with
      an evaluator for a different degree. The most common use would be
      to shift the pan-mode spline down from 1 to 0, to see pixels as
      actual rectangles, as they are shown when lux is invoked with -f0.
      Note that this only affects the interpolator currently in use:
      if you are in an animated sequence (panning, zooming etc.), pressing
      S or Shift+S will affect the 'fast' interpolator, and if the image
      is at rest, it will affect the 'quality' interpolator.
      Note that shifting is not the same as choosing a different degree of
      b-spline for interpolation: if you're shifting down from, say, a cubic
      b-spline to degree zero, you'll see the spline's coefficients, not the
      original image pixels.

Shift+S  shift spline down (results in sharpening if degree > 1)

X     respond more strongly to user input

Shift+X  respond less strongly to user input

A     accelerate spin in automatic pan mode

Shift+A  slow down (brake) spin in automatic pan mode

F1    launch the show-again sequence. This will rebuild lux' state from
      scratch. Most of the time this will be very quick - lux is clever
      enough to reuse expensive resources. But if anything has changed in
      the meantime, the change will become visible. Let me give you an example:
      if you're displaying a facet map and change a facet's brightness or
      orientation in the PTO file and save it, pressing F1 will make the
      changes visible. And this won't take long at all, because here the
      expensive resources (the interpolators) don't have to be rebuilt.
      You can also just click into the 'override arguments' text field in
      the GUI and commit the empty field with Return to get the same effect.
      The actual image is not checked for changes, though - if you have
      modified it, you must change to a different image and back or relaunch
      lux before it will be read again from disk. Before the 'relaunched'
      image is displayed, lux will try and restore the viewer to the same
      state, so that view orientation, white balance, zoom factor etc. are
      the same as before. Most of the time this should work, but some
      changes (notably projection changes) may land you in an 'impossible'
      situation, and lux may even crash. I built in a sanity check, but
      I'm not entirely certain it will catch every 'impossible' situation.
      If you land in a 'black area', try and press 'Return' to get to an
      area with visible content.
      Note that this action does *not* look at the image's creation time,
      so it won't notice if you have modified one or several images relevant
      to the current view. The show-again sequence will always assume that
      the images themselves have not been modified.

F2    normally, vertical lines will remain vertical. This behaviour can be
      switched off/on with this key, which is sometimes useful near the
      poles. For 'mosaic' images, this is the default. 'Letting go' of the
      vertical makes work 'near the poles' easier - such work can 'collide'
      with the viewer's default of holding the vertical. So if you want to
      explore parts of your image near the pole, press F2, do your
      navigations, and then press F2 again once you're done, plus 'J' to
      'regain' the vertical (just pressing F2 will leave the camera roll
      as it is).

F8    toggle permission for pan mode. Panning will function with or
      without 'pan mode' - it's just a more efficient way of calculating
      frames which can be used in certain situations. On by default.

F9    apply simple tonemapping to HDR display. More of an emergency measure.

F10   orientation bias. Captures the current view as the new image center
      and orientation reference, which is useful for 'wobbly' horizons.
      'Return' will now return to this view, rather than the one shown
      when the program was first launched. Note that you should only use
      this feature when your view has the *horizon level and centered
      vertically*. If an image comes out rotated by a multiple of 90
      degrees due to an incorrect or missing JPEG orientation tag, press
      '9' until the orientation is right and then press F10.

F12   toggle snap_to_hq_interpolator on/off. When on (default), lux uses high
      quality interpolation when not moving, when off, the 'fast mode'
      interpolator is used throughout. At times you may prefer that, since
      the switch from one interpolator to the other can be quite noticeable.
      With the recent introduction of 'snap-to-stitch' and 'snap-to-fuse',
      synoptic displays will, by default, be 'properly' stitched/fused with
      multilevel blending, using an adapted version of the Burt and Adelson
      image splining algorithm. This is done in the background, and only
      when the viewer is at rest and has no background tasks running.
      If you switch snap-to-hq off, this will not happen - instead the
      ordinary 'live view' will be displayed when the viewer is at rest.

Q     level bias. See the '-B' flag for an explanation. Q raises the level
      bias by 0.5, Shift+Q lowers it by 0.5. This is new and experimental,
      and I'm not sure if I'll keep providing this feature.

Shift + Left/Right:  facet solo mode: move to previous/next facet.
                     Note that you may get a black screen if the 'solo'
                     facet is not visible with the current view. If you
                     proceed beyond the last facet, you'll also get a black
                     screen. Going 'all the way back' will finally activate
                     'synoptic' mode showing all images together.
                     These key combinations will now also navigate in a
                     video and move to the next/previous single frame.
                     Moving backwards in a video may be quite slow.

!!!! NEW GUI CODE !!!!

After developing a new GUI for lux for some time in the 'imgui' branch, I've now convinced myself that it works well on all platforms I support and that it is superior to my own GUI - the 'lux legacy GUI', a strip of buttons at the top of the screen. So I've now merged the imgui branch back into master. The old GUI is still available (pass --legacy_gui=yes on the command line, or press 'C' to toggle between the old and new GUI) but it may eventually be removed. What I haven't yet managed is to write documentation for the new GUI - it's mostly self-explanatory via ample tool tips, though. The mouse and keyboard commands are the same as before; only the graphical elements are new and consist of several 'panels' giving access to common settings and options.

***** old text, pertaining to the old lux GUI only:

click on 'SLIDES' to toggle slide show mode. If the button is grey, slideshow mode is off, and you have to manually 'tab' to the next image. If the button is green, slideshow mode is on and there was no 'manual' interaction with the view, so the next image will come on after the slide interval has run out. If the button shows yellow, slide show mode is on but there was 'manual' interaction with the view. Such interaction 'suspends' the slide show for the current image, but if you 'tab' to the next image, it will be on again. If you click on the 'SLIDES' button while it's yellow, the next image will come on with active slide show mode.

This sounds complicated, but you'll soon find that it's a good way of running a slide show: if you switch slide show mode on via the GUI, you don't want to wait another x seconds until the image changes. If you want to interact with the view, you don't want the slide show to jump to the next image while you're interacting. If you've had enough looking at the current image, just press tab, and if you were in slide show mode before, the slide show will continue without you having to switch it on again. You'll get used to this quickly and probably ask yourself why anyone would want to do it differently ;)

Going to the previous image with Shift+Tab will also suspend the slide show: if you go back, you usually have a good reason to do so and want to look at the image closely - you don't want the slide show to move on while you're doing just that. Press Tab to move on.

How can you just stop the slide show? Any interaction will do, and pressing 'Return' counts as an interaction, even if it may not produce a visible effect. A single primary-button click with the mouse also does the trick.

Slide show mode will be on automatically if you launch lux with -d, passing a slide interval in seconds.

Override arguments:

There's an unlabeled field at the bottom right of the GUI elements. Here you can enter command line arguments. Once you're done, press Return to commit, and the current image will be displayed again with these arguments prepended to the image file name, reusing the current interpolator if possible. Note that you have to commit with Return while the GUI is still visible, because key strokes etc. are only routed to the GUI while it's visible on-screen. This also holds true for numerical inputs in other GUI fields.

*** end of text pertaining to the old GUI only

Gradation and white balance control commands:

F5 and F6 will make the display darker or brighter, respectively; F3 and F4 apply to the black and white point. It's probably more convenient to use the GUI for the black and white point, and with the GUI you can also directly enter numerical values. Note that lux uses a range of 0 - 255 for the range the display can handle, no matter what the image file may contain: 16-bit TIFF and openEXR images will be converted to values based on this range, which makes the final conversion to 8-bit RGBA used by the viewer faster and gives you a uniform handle. Of course values from openEXR files may exceed 255, and internal processing will not 'cut off' high values; forcing the values into a specific range is only done when it becomes necessary: for on-screen display or when storing snapshots in fixed-range formats like JPEG.

White balance control is done by choosing a view containing 'neutral' content and then pressing 'W'. The white balance will be modified so that the average of the displayed view has equal colour components. So what's 'neutral' content? This may be a small area you know to be white or grey, onto which you zoom before pressing 'W', but at times just taking the white balance from a 'normal' view produces good results. The 'W' key works like the one-click white balance control you find in some image processing packages, only that it does not use a single pixel as the reference, but the whole frame that's displayed. Since you can zoom in as close as you like, you can use an individual pixel as your reference, but oftentimes you want a larger area. By using the current view, you don't need a dialog where you might choose the 'radius of the white balance reference area'; it's an intuitive way to get the job done, and it's just as easily undone by resetting the white balance with Shift+W. Note that the quickest way to zoom to a specific small spot is this: right-click on it to center it, then press '2' repeatedly until it fills the view, then press 'W'. Finally, press e.g. '1'.

The white balancing code simply sums up intensity values, so bright pixels will have more effect than dark pixels. Take care when selecting a white area: if it's white because the image was overexposed in that place, you won't get a valid white balance from it. It's usually better to select a light or medium grey area. In landscapes, you often have shaded and sunny areas in the same image. This makes it difficult to pick a 'grey' area: would you pick the blueish grey from the shade or the yellowish grey from the sunny area? In such situations, try and set the view to a large part of the image containing both light and shade and do the white balance with that view.

Pressing Return or F7 will reset brightness, gradation and white balance.

Note that, when disabling internal linear RGB processing (by passing -l on the command line), gradation and white balance manipulations aren't mathematically correct. The -l flag is there to squash processing times by working directly on sRGB data, whereas with internal linear processing the last stage of the pixel pipeline has to convert the linear data to sRGB, which takes some time. If you have issues with wrong colours, try using --process_linear=true.
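
For example, with a hypothetical image file, the fast variant is

lux -l my_image.jpg

and the mathematically correct variant is

lux --process_linear=true my_image.jpg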

W     glean white balance from current view

Shift + W reset white balance

F5    darken the display

F6    brighten the display

F3, Shift+F3: raise/lower black point

F4, Shift+F4: lower/raise white point

F7    restore the default brightness, black and white point and white balance

F9    toggle use of tonemapping operator. tonemapping is off per default
      and should only be used in linear light. (--process_linear=true)
      The simple tonemapping provided here will compress the dynamic range
      so that pixels up to +Ev2 will not white out, but contrast suffers
      and colours are dulled.
      Exposure fusion will produce much better results, but it will take
      much longer to compute.

Note that - provided your system can do so - lux can detect several keys pressed at the same time. So pressing F3 and F4 together, for example, increases contrast.

Sensor tilt commands: To help with collapsing lines and similar perspective problems, lux allows sensor tilt. Other sensor manipulations are currently unsupported; lux has code for sensor shift and sensor resize, but the UI does not yet use these features. Pressing 'Return' will undo sensor tilt. Currently, sensor tilt can only be used for single-image views, and it will not have an effect with 'mosaic' mode images. To get rid of collapsing lines, first make sure the vertical center line of the view coincides with vertical content. Then press Shift+Up/Down until all vertical lines are indeed vertical. This will only work well if the collapsing lines aren't too pronounced, because it also elongates the view along the vertical. It works best with views of plane surfaces, like quick snapshots taken of signposts which you want to transform back to rectangular shape.

Shift + Up/Down rotate sensor around horizontal axis

Zooming, panning, scrolling and rotating are 'chronic' interactions and will continue while the key is held. This is also true for brightness and white/black point control and sensor tilt. Many other interactions are 'acute' and will occur once per key stroke (or its automatic repetition by the keyboard driver). New lux users aren't always aware of the fact that 'chronic' interaction is done by pressing and holding keys or buttons, and just get little leaps and twitches. And they often aren't used to position control by click-and-drag displacement, expecting rather that the image should act as if 'glued to the mouse cursor'. I might offer this as alternative behaviour at some point, but I think it's best to learn using click-and-drag displacement, because it makes long, smooth animations easy, even if they lead to places well outside the current view. Doing that with the image glued to your mouse pointer is simply not possible. You'll just have to 'learn to fly' ;)

The display will normally show a rectilinear view, as if taking a picture of the view with an ordinary lens. This looks natural if the horizontal field of view matches the horizontal viewing angle to the screen, give or take a bit, but especially with large fields of view (zoomed far out) the view becomes unnatural (unless you go very close to the screen). If you need to show a wide field of view - for example when producing banners - you can use a different target projection; see --target_projection.
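
For example, to produce a very wide cylindric view for a banner, an invocation along these lines should work (a sketch - the file name is hypothetical, and I'm assuming --target_projection takes the same projection names as --facet_projection):

lux --target_projection=cylindric --hfov_view=150 pano.jpg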


Comic Book Mode

I've recently started adding functions to lux to make it usable as a comic book reader. This is still experimental. lux can't open cbr or similar archive formats, so you'll have to first extract the images from an archive. Next you open the whole set in lux. You'll see the whole page displayed and you can tab through the images like any other image series. But oftentimes you'll want to look at the images in close-up, filling the screen horizontally. To fill the screen horizontally, you can use Shift+Y, but that will land you in the image's center. It's better to use the 'Home' key, which also aligns the image's top margin with the view's top margin. From there, you can just press 'Page Down' repeatedly. Pressing 'Page Down' while the bottom of the image is displayed will jump to the next image's top, so you can 'Page Down' through the whole set. 'Page Up' works the opposite way, and the 'End' key gets you to the bottom of the current image. Anytime you want to see the whole image again just press Y. I find this a pleasant way to get through a comic book, but I haven't used it much yet, so I may change the heuristics. This interpretation of the keys is used whenever the images are in 'mosaic' format, which is what lux falls back to if there is no projection information. You may find it a pleasant way to get through other (sets of) images in portrait format as well. It's also a good way of looking through scans of documents which are usually in portrait orientation. Note that with a 16:9 display, three steps vertically may be a bit too few. You can remedy that by using a windowed display with 'narrower' aspect ratio, which will make the three steps cover a wider area vertically.
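
For a zip-based archive (like a .cbz file), the extraction step might look like this - a sketch, assuming the common 'unzip' tool and hypothetical file names:

unzip comic.cbz -d comic_pages
lux comic_pages/*.jpg

From there, 'Home' and 'Page Down' take you through the pages as described above.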


Cubemaps

There is a recent addition to lux' set of supported input formats: cubemaps. These are commonly used in openGL to provide a 360X180 degree 'backdrop'. lux does not use openGL cubemaps, but instead provides its own implementation, rendering the view on the CPU, using lux' 'normal' decimators and interpolators. With appropriate cube face images, the result is a seamless rendition with all the capabilities lux has to offer for single-image data, like alpha channel processing, selectable spline degree, image pyramids, etc.

Why bother? Because cubemaps are quite efficient to render. The geometrical transformation gets away without any transcendental functions, and the time spent for facet detection and other mathematical frills is not too much either. If you have problems getting some projections to animate smoothly, you might consider reprojecting them to a cubemap to squash rendering time. This is especially effective for 'difficult' transformations like the fisheye projection. Another nice property of cubemaps is that the memory access patterns don't have particularly bad worst-case scenarios (like near-pole views on full sphericals or back pole views on fisheyes). Using cubemaps is mainly a tool for speeding up rendering; for production of high-quality stills, other projections are better - and the conversion from some other projection to a cubemap always costs a bit of quality.

Cubemaps, in lux, are introduced via an 'ini file', which is passed as an image. This ini file must contain all the information about the cube faces which lux needs to set up the viewer correctly. By bundling this information in a file, this file becomes - to lux - just another source image, so the usual mechanisms of enqueueing such a 'meta image' in a multi-image session - interpolator reuse on 'replay' etc. - apply. Given the right ini file plus the cube face images, there is no need to be aware of the inner workings, and such data sets can be shared easily (just put the lot in a folder).

Setting up an ini file for a cubemap is straightforward. First you need six square images for the cube faces. These images have to be in rectilinear format and they have to have a field of view of at least 90 degrees, but it's preferable if the field of view is slightly larger - just half a degree extra is fine. Why the extra? Because with precisely 90 degrees, interpolation would have to span several cube faces when showing content near the cube's edges and vertices, which is a hard mathematical nut to crack. If the source images are slightly larger than 90 degrees, interpolation can just rely on a bit of continuation for each cube face and always base the interpolation on a single face. lux has no special code to interpolate over face edges, so to get the best result, use slightly larger cube faces. That said, spotting the artifacts I am talking about isn't usually easy, and you may well not care. Try images with precisely 90 degrees fov to see (or not see) what I'm talking about!

Let's assume you have a full spherical and want to make six suitable cube faces. Start out launching lux with a command line like this:

lux --snapshot_prefix=cube \
    --hfov_view=90.5 \
    --auto_position=no \
    --fullscreen=false \
    --window_width=960 \
    --window_height=960 \
    --snapshot_magnification=2 \
    --projection=spherical \
    --hfov=360 \
    --image=spherical.jpg

Here's an explanation of what the parameters do:

--snapshot_prefix=cube is simply to produce filenames for the snapshots starting with 'cube'. --hfov_view=90.5 sets the display window's field of view to 90.5 degrees, so a tad over 90 degrees, as explained above. --fullscreen=false starts lux in 'window' mode, rather than in fullscreen mode. --window_width=960 and --window_height=960 set the size of the display window. 960 is a good value for a fullHD screen, because it's small enough to fit the screen. --snapshot_magnification=2 will produce snapshots which are twice the size of the display window, so you'll get snapshots measuring 1920X1920. The remaining parameters set the projection, hfov and filename of the full spherical you want to process.

Now it's just a matter of pressing a few keys, here's the sequence:

E 4 E 4 E 4 E 4 PgUp E PgDown E

Confused? 'E' translates to 'do a snapshot', pressing '4' moves the view by one quadrant, and PgUp/PgDown move the view towards the zenith or nadir, respectively. You now have the cube faces in six files named cube0.jpg to cube5.jpg. Another way to create the cube face images is to write an ini file for each face, pass the ini files to lux and do a snapshot of each. This is also a good approach if you consider setting up a shell script to do the job. You may also want to consider the 'next_after_snapshot' argument in this context.

Next you create the ini file:

projection=cubemap
cubeface_fov=90.5
cube_front=cube0.jpg
cube_right=cube1.jpg
cube_back=cube2.jpg
cube_left=cube3.jpg
cube_top=cube4.jpg
cube_bottom=cube5.jpg

Save the file as 'cubemap.lux'. Now you can launch lux like this:

lux cubemap.lux

If you're curious, you can repeat the process with a field of view of precisely 90 degrees - note that you'll have to omit the 'cubeface_fov' line in the ini file or pass cubeface_fov=90 for this trial. When you zoom in to the cube's vertices, you may be able to spot the discontinuities. To help with spotting them, use a low-degree interpolator (-f0 -q0). With the 'normal' interpolators, the discontinuities are just about visible for 90 degree cube faces, but for larger cube faces (like in the example above), it should be very hard to even find the right place.

Note how in the example above we have used the window_width and window_height arguments together with --fullscreen=false to get a square view. If your screen is too small, your system may squash the viewing window to fit the screen. So make sure your cube face images are in fact square. If you want the cube faces in smaller or larger resolution, use a different 'snapshot magnification': the default is 1, in the example above we used 2.

Note also how the cube faces have to be oriented: There are simple rules to follow. It goes like this: picture yourself inside the cube. All faces 'around' you should be upright when you face them. When facing the front face, looking up or down should show the top or bottom face 'the right way round', so you can sweep your gaze from zenith to nadir with the cube faces strung up in the 'right' orientation. lux avoids using a fixed numbering scheme (like, 'the first cube face is the front one') and relies on symbolic names instead (cube_front etc.) to help you get it right.

A word of caution: you shouldn't have lux display a cubemap by passing the cube faces on the command line. Best use an ini file for the purpose, just like in the example above. If you have to display a cubemap with command line parameters only, you can do so by passing all arguments you'd have in the ini file (with '--' prepended) plus one 'dummy' image. This will show the cubemap, but you can't show other images in that session - the arguments pertaining to the cubemap will always 'win'. Without the 'dummy' image, the lux invocation will fail altogether, because lux expects at least one image file in every invocation.

While cubemaps are 'meant' to provide a 360X180 degree representation of a scene, you can also use unconnected images and 'misuse' the cubemap as an effect, placing six images in the six cardinal directions.

The implementation of cubemaps in lux is a step towards handling more complex 'facetted' images: cubemaps are just one of an infinite number of schemes where a view is composed from 'facets' on some convex polyhedron around the view's origin. The main problem with such schemes is detecting (quickly) which facet is to be used as the source for a given output pixel. With cubemaps, this can be done quickly, but with arbitrary facets, more effort is needed (like comparing the 'view ray' to all face normals) which can take considerable time. The saving grace here is the nature of facetted images: Most of the time, successive pixels lie on the same facet, and this fact can be exploited to slash facet detection time. Please have a look at the cubemap code in lux if you're interested in how this is done, and proceed to the next section to find out more about 'facet maps', which implement such multi-image views with arbitrarily placed 'facets'.


Facet Maps - Synoptic Images like Panoramas and Exposure Fusions

Facet maps are similar to cubemaps, but here the number and orientation of participating partial images - the 'facets' - can be chosen arbitrarily. Additionally, facet maps offer facets with differing projections and selectable brightness and other characteristics. To the mathematically inclined: a facet map in lux is similar to a Voronoi diagram. There are, conceptually, two routes to synoptic images: the images can be layered on top of each other, each contributing to every pixel in the result. This is what exposure fusion, HDR blending and deghosting do. It's like layer processing in an image processing program. Or the images can be assembled like a patchwork, picking content from one specific source for each pixel in the result. This is what happens with panorama stitching. The distinction is not as clear-cut as this - panorama stitching does combine pixels from several images to get a 'seamless' blend - but the idea should be clear. To put it differently: image fusion produces a weighted sum of several images, deriving the weights from qualities intrinsic to the images (like their brightness or sharpness), whereas stitching produces weights from spatial criteria. What's common to both types of synoptic imagery is that several source images are involved (the 'facets') and a single image is the result - hence the term 'synoptic'. What's also common to both types is the notion of the weighted sum of several images - this is not as obvious for panoramas as it is for image fusions, but it's still the case, only that for panoramas, typically one image 'wins' and determines a target pixel more or less exclusively, whereas in image fusions, target pixels are more likely a blend of several source image pixels.

lux uses two different routes to provide synoptic imagery: a fast one, which is used for animated sequences, and a slower one, which produces high-quality results but takes longer to compute. The high-quality route uses a modified version of the Burt&Adelson image splining algorithm - it's used for both image fusion and stitching, only the weight generation is done differently.

The quickest route to facet maps is using PTO files as input: to lux, the PTO file describes a facet map. You can just open PTO files like any other image file and you needn't know much about facet maps to merely look at the data, but please be aware of the fact that lux only processes a subset of PTO format:

  • orientation (yaw, pitch, roll only, not translation)
  • horizontal field of view
  • exposure value
  • projection (only those projections known to lux, and not 'mosaic')
  • lens correction parameters
  • vignetting control parameters
  • source image cropping
  • source image masks
  • stacks (in animated sequences, only the 'stack parent' is displayed)

With the introduction of OpenImageIO, lux can now display and process PTO files referring to raw camera images (like .CR2 files). This is a new feature and requires a PTO file with parameters fitting the raw image's metrics. If you have a PTO file with TIFF images made with dcraw, the parameters should fit, and you can replace the .TIFF extensions with the raw file's extension and then reopen the PTO file with lux (it won't work with hugin that way). But beware: the change may go wrong, depending on whether you used dcraw to auto-rotate the input when you made the original TIFFs or not. If some or all images are oriented wrongly, you'll have to pass a parameter to OIIO to mend that. The default in lux is to autorotate, so when that goes wrong, use --oiio_arg=raw:user_flip=-1. With this syntax, you can also introduce other parameters to OIIO's RAW plugin - and to other OIIO plugins as well. The command above can be understood like this: --oiio_arg=... means 'pass the following to OIIO'. The rest, after the '=', is what is passed to OIIO; in this case the raw plugin's user_flip parameter is set to -1. Pass 1 instead of -1 to turn autorotation on.
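
Spelled out as a complete invocation (the file name is hypothetical):

lux --oiio_arg=raw:user_flip=-1 bracket.pto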

Another point you need to keep in mind is that lux will glean from the PTO what it can and take it from there - but it will simply ignore information in the PTO which it can't process, like colour profiles or EMOR camera response curves. lux will give the PTO its 'best shot', which is oftentimes perfectly good enough, but don't expect it to be a complete drop-in replacement for more sophisticated image fusion or stitching software. I do make an effort to evolve lux' capabilities in this regard, but since I don't rely on libpano as a backend, I have to rewrite everything, and this takes time.

If any non-zero translation parameters are found in a PTO file, lux will terminate, because simply ignoring the translation parameters would render a geometrically wrong image. The same holds true for lens shear parameters (g and t fields in the PTO file). Note also that lux ignores stack assignments in animated sequences - blending the stacks 'on the fly' would take too long. In animated sequences, only the 'stack parents' are used. Only when the viewer is at rest is proper stack processing done (unless it's disabled), and the result will show as soon as the rendition is complete.

Note that if the image files are in a format which can hold linear RGB or sRGB data, you have to specify --facet_is_linear=yes or =no, either once, which is taken for all facets, or once for each facet. Colour spaces apart from linear RGB and sRGB are currently not supported. Note also that the Ev values in a PTO file may not be entirely correct for lux if the input is not in linear RGB - probably due to the use of an EMOR camera response curve, which lux also ignores. If the Ev values seem wrong, use Shift+L to let lux calculate the brightness values as best it can, but don't expect 100% success - especially not if the images aren't in linear light. Slightly misadjusted brightness will show in animated sequences; when multi-level blending 'kicks in', brightness will vary smoothly, so the misadjustment will become less visible.

Circular cropping of circular fisheye images and rectangular cropping of other source images is honoured as of lux 1.1.0. This is done via an alpha channel manipulation (--alpha=yes is implicitly set), which will make processing somewhat slower, but the output should be as expected. Together with a bug fix concerning the ranking of facet maps with lens shift (PTO d and e values), lux can now display PTO files for 'dual fisheye' images where both halves of the full 360 degrees are in circular half-images next to each other - provided that the PTO assigns correct cropping and shift to the two instances of the single input image. With lux 1.1.1, image stacks and all types of masking used in PTO format will be supported.

Now for the 'blending mode'. By default, this is selected automatically (--blending=auto). With this setting, lux looks at the images' hfov, yaw, pitch and Ev and uses a heuristic method to determine whether the images constitute a panorama (blending=ranked), an exposure bracket for HDR blending or a focus stack (blending=hdr), or an image series for deghosting (blending=quorate).

Focus stacks are recognized by exposure_weight being zero and contrast_weight one, on top of similar position and Ev value, and they also use blending=hdr even though this is a bit counterintuitive (TODO rename to blending=fuse). If the automatics don't work for you, you must pass the blending mode explicitly using --blending=... on the command line.
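
For example, to force focus-stack processing (a sketch - the file name is hypothetical, and I'm assuming exposure_weight and contrast_weight are spelled the same way on the command line as in the text above):

lux --blending=hdr --exposure_weight=0 --contrast_weight=1 stack.pto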

If you make modifications to the PTO (e.g. by manipulating it with hugin and saving the file) just press 'F1' in lux to refresh the view and show the changes. So you can use lux as an 'external preview' to a stitching session: once you've completed your actions in the stitcher, save the PTO, then switch to the lux session and press 'F1'. This may not work reliably for every change - changes to source image cropping for instance do currently still need a complete reload.

Note that any specifications given in hugin or similar programs as to what output should be produced from a PTO file (like single warped images, a panorama, etc.) are ignored by lux: it will always produce a single blended result. lux also ignores images which are 'switched off' in hugin. To use such PTO files with lux, simply make a copy and remove the inactive images, then pass the modified copy to lux.

While facet maps can be used to show a collection of tailor-made synthetic images forming a larger image - which uses the facet map just as a format to store and present this larger image - they have a more important use: they allow 'live stitching', the display of a set of registered source images as if they had already been stitched into a panorama. This is a great way to verify image registration, and with suitable source images which 'fit' very well, it can even make stitching unnecessary, because the 'live stitching' already looks 'good enough'. Maybe a quick explanation of 'image registration' is in order here. This is a technical term describing the process of correlating each image's pixels with rays from the viewer's position. Images which share content will share rays, and this is what 'control points' are: they note prominent points in two 'overlapping' images which 'share the same ray'. So when you are building a panorama in a stitcher like hugin, the process of finding control points and then optimizing image characteristics like yaw, pitch, roll, fov, etc. is just that: 'image registration'.

The current level of evolution of the facet map code is not yet on par with 'proper' stitching, lacking features like camera response curves and colour profile processing, but the most important features are already present. Producing snapshots of the 'live stitch' can produce panoramic images, and due to the selectable output projection, simple stitches of, e.g., full sphericals are now possible with lux. More sophisticated stitchers like enblend will do 'seam optimization', which tries to place the seams between overlapping partial images so that they are hard to spot. They also use 'multi-level blending', which makes the transitions smoother. These two techniques are computationally demanding, and they would take far too long to calculate 'on the fly' in a viewer like lux. So to hide seams in animations, lux relies on a simple technique called 'feathering', which simply 'crossfades' from one image to the next. This is optional, because it's also computationally demanding - the default is to simply 'cut' the images at facet borders, and unless your images fit perfectly, you'll see the borders. When at rest, lux will actually stitch/fuse the source images 'properly', which takes a little while. 'Properly' means that the images are run through a modified version of the Burt&Adelson image splining algorithm, producing a - hopefully - seamless blend. So when you navigate in the view, lux will show you a rendition which is fast to compute to keep you oriented, but when the viewer is at rest, the slower, more complex methods are used to render a high-quality still image. This is called 'snap-to-stitch', or 'snap-to-fusion' for exposure fusions, and the feature is on by default, but you may deactivate it on the command line (especially snap-to-fusion, which can be annoying).

A word about seam optimization: the 'conventional' stitching method first 'warps' the partial images to the target projection and size. Seam generation and optimization is then done by correlating these warped images, which may produce strange results if the 'warp' is very strong or rips single source images apart into several bits spread out over, e.g., a full spherical. lux avoids these issues by performing the seam generation strictly by geometry and in a 3D data model resembling a Voronoi diagram. This produces seams which are inherently well-suited, unless the images don't fit well due to flaws in the photographic process - in such cases the seams derived from the Voronoi diagram do at least constitute a 'reasonable' choice. As a rule of thumb, the seams are placed so that the seam is equidistant from the centers of two facets touching each other. This has several nice mathematical side-effects and is often the optimal choice, so the lack of seam optimization in lux is less of an issue than one might think. Well-fitting, well-registered image sets should stitch very well. If the input images have different fields of view, images with a smaller field of view are additionally favoured by default, to allow automatic 'tele inserts'. Instead of employing 'strict' Voronoi diagram mathematics, lux employs 'ranking fields' which assign 'rank' according to more features than just distance-to-center, producing a cross-breed of a 'shallow cone' over most of the image (which corresponds to distance-to-center) and a 'steep pyramid' near the edges. The 'shallow cone' part is elevated more or less, depending on hfov. This default ranking can be overruled by assigning other prioritization modes, see the 'facet_priority' argument. There's another point to make about how images are blended in lux: enblend blends the first two images, then adds the third to the result, then the fourth, and so on. lux calculates a set of layers which are subsequently added up, and the sequence does not matter. I think that is cleaner.

lux' 'native' way of describing a facet map is with a 'lux ini file'. This will rarely be used - the information is best generated with software like hugin and passed in as a PTO file. If you want to change the way the PTO file is interpreted, you can pass additional parameters on the command line, or 'wrap' the PTO file in a lux ini file with additional parameters, introducing the PTO file with a line like

image=my.pto
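
A complete wrapper might look like this (just a sketch with arbitrary example values - I'm assuming that the ini keys mirror the command line arguments, as elsewhere in this README):

blending=hdr
facet_is_linear=yes
image=my.pto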

Here's the description of the parameters you can use in lux ini files to describe facet maps, even if you'll rarely want to do so:

When describing facet maps with a lux ini file, this file has to set projection=facet_map, and for each facet you need to specify the image's name and properties. For each facet, you must pass the image file's name by specifying 'facet=...'. The remaining properties are all passed via options like 'facet_*', where '*' stands for a specific property. The facet properties all share a common scheme: if you don't pass the option at all, a default will be picked for all facets; if you pass it precisely once, this value will be used for all facets (instead of the default); and if you pass it more than once, you have to pass it once for every facet. Here's an example:

projection=facet_map

facet=image1.jpg
facet_projection=rectilinear
facet_hfov=50.0
facet_brightness=1
facet_yaw=0
facet_pitch=0
facet_roll=0

facet=image2.jpg
facet_projection=rectilinear
facet_hfov=50.0
facet_brightness=1.2
facet_yaw=30
facet_pitch=0
facet_roll=0

...

Another fast route to a facet map is to use images which are already 'remapped' so that they fit on top of each other - in other words, their facet_roll, facet_yaw etc. values would all be the same. You can write a simple lux ini file to pass such an image set to lux, here's an example for an image set constituting a bracketed shot after remapping, where the remapping may have been done with lux or another program like hugin's 'nona'.

projection=facet_map
blending=hdr
facet_projection=rectilinear
facet_hfov=66
facet=IMG_1.JPG
facet=IMG_2.JPG
facet=IMG_3.JPG

... where the above is about the bare minimum.

Writing such a lux file is easily automated, so writing a script which employs lux for the blending stage only is simple: just create a temporary lux file, call the blending code with a 'lux action' and delete the lux file, à la:

echo ... > temp.lux
lux --fuse temp.lux
rm temp.lux
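
Using the bracketed-shot example from above, a complete script might look like this (a sketch, assuming a POSIX shell):

#!/bin/sh
# write a minimal lux file describing the pre-remapped exposures
cat > temp.lux <<EOF
projection=facet_map
blending=hdr
facet_projection=rectilinear
facet_hfov=66
facet=IMG_1.JPG
facet=IMG_2.JPG
facet=IMG_3.JPG
EOF
# run the exposure fusion as a 'lux action', then clean up
lux --fuse temp.lux
rm temp.lux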

Of course you can write a PTO file to the same effect, where the remapped partials share all geometrical parameters. This is up to you - writing the lux file by hand is straightforward, but if you have tools at hand which create a PTO file from the partials, using them may be easier still ;)

To reiterate: this method will not work for panoramas, because lux can't (yet) do content-based seam generation or seam optimization. Panoramas need the yaw, pitch, roll etc. information in the PTO file or lux ini file to stitch the images; lux can't stitch pre-remapped images into a panorama. For HDR merging, exposure fusion, focus stacks and deghosting it's a valid option, though.

Such a facet map is slightly 'wasteful' computationally: the coordinate transformations are calculated for all partial images separately, even though a single calculation would suffice, because all partials share the same surface. I may add streamlined code to process such pre-remapped image sets, but for now you'll have to live with the suboptimal performance if you decide to use 'external' remapping code, which will slow you down a good bit anyway, because the intermediate images will have to be stored to disk and reread. Going that route is only sensible if your PTO files use features which lux does not offer.

The list of facet features lux can process is growing, but as of this writing, the following options are recognized (also refer to lux_options):

--facet_projection={rectilinear|spherical|cylindric|stereographic|fisheye}

  Sets the facet image's *source* projection. Note that this is different
  from the 'target projection' of the view, which determines how the data
  are presented to the viewer.

--facet_hfov=<real>

  This option sets the horizontal field of view of a facet image.
  For some facet types this can be up to 360 (the value is in degrees).

--facet_brightness=<real>

  Brightness as a multiplicative factor in linear light. This is not
  to be confused with the exposure value, which is logarithmic: with
  each Ev step, the facet brightness doubles.

--facet_is_linear=<yes/no>

  facet is linear RGB (yes) or sRGB (no). This option is only relevant
  for TIFF images; other formats are either linear or not. lux does not
  handle ICC profiles or camera response curves - your input has to
  be sRGB or linear RGB. This option is especially relevant when
  HDR-merging linear RGB TIFFs made from RAW images.

--facet_yaw=<real>
--facet_pitch=<real>
--facet_roll=<real>

  These options set the orientation of a facet. The values are in
  degrees and are relative to the viewer 'in rest'.

--facet_handicap=<real>

  Pixels on facets are 'ranked' by their distance to the facet's
  center (measured in model space units). A handicap value is added
  to the distance, resulting in worse ranking. This mechanism can
  be used to deliberately have some facets show 'in front of' other
  facets (if facet_priority is set to "explicit", see below).

--facet_priority={none|explicit|hfov|order}

  Per default, 'facet_priority' is set to 'none', which results in
  ranking by distance-to-facet-center only. Using 'explicit' requires
  specific per-facet values of 'facet_handicap'. Using 'hfov' will
  produce handicaps correlating with the facet's hfov, so that facets
  with large hfov will receive large handicaps. This puts facets with
  small hfov in front of facets with larger hfov, an effect which is
  desirable when adding a few tele shots of detail to some wide-angle
  background. Finally, passing 'order' will add handicaps by facet
  number, so that facets with low numbers will occur in front of facets
  with high numbers.
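
As an illustration, here's a hypothetical call putting the first of two facets 'in front' by explicit handicaps - the file name and handicap values are made up, and remember that per-facet options, when given more than once, must be given once per facet:

lux --facet_priority=explicit --facet_handicap=0 --facet_handicap=5 two_facets.pto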

--facet_lca=<real>
--facet_lcb=<real>
--facet_lcc=<real>

  These options are used for lens correction parameters a, b and c,
  as they are used in panotools. Lens correction and vignetting
  correction are typically set when PTO files are processed and
  a 'live stitch' of real-world photographs is intended, they will
  rarely be useful with synthetic images.

--facet_lch=<real>
--facet_lcv=<real>

  These options are used for lens correction parameters d and e,
  as they are used in panotools. I've chosen 'h' and 'v' for
  'horizontal' and 'vertical' to set them apart from an internal
  parameter 'd' also used for lens correction.

--facet_lcs=<real>

  This option sets the lens correction polynomial's reference radius.
  The panotools-compatible value is 1.0 and makes the reference
  radius half the shorter edge of the image.

--facet_squash=<integer>

  Passing a value larger than zero here will result in removal of
  stages of the image pyramids used for interpolation. This can be
  desirable if the images are in unnecessarily high resolution and
  there is not enough memory. This will (currently) not affect the
  'primal' pyramids, so if 'build_pyramids' is set to 'no', this
  option won't have an effect. The effect is the same as what's
  done to a single image with the --squash=... argument, but for
  facet maps we allow per-facet values. If no per-facet values
  are passed on the command line, but a 'global' squash value is
  passed with --squash=..., this value will be 'rolled out' to all
  facets.

--facet_vca=<real>
--facet_vcb=<real>
--facet_vcc=<real>
--facet_vcd=<real>
--facet_vcx=<real>
--facet_vcy=<real>

  These options set panotools-compatible vignetting correction values.

--facet_crop_active=<yes/no>

  If facet_crop_active is set to yes, the following arguments
  are used to calculate an alpha mask for the facet. If input is a
  PTO file, cropping info from i-lines is translated into the
  facet_crop... arguments.

--facet_crop_elliptic=<yes/no>

  Determines whether the alpha mask should be elliptic or rectangular.
  If input is a PTO and the facet's projection is a circular fisheye,
  facet_crop_elliptic will be set to true. In this case, the cropping
  extent will define a circular cropping - lux is more flexible and
  also accepts elliptic ones, which is a superset.

--facet_crop_fade=<real>

  If set to a value greater than zero, the facet will be masked out
  with a feathered mask (it will be 'faded out'), where the fade-out
  occurs over a zone of about as many pixels as the value passed.
  The default is a hard mask, which is usually fine because the
  margins will rarely make it into the final image. But if the
  margins are visible, the staircase artifacts of a hard mask can
  be quite ugly, especially for elliptic masks - rectangular masks
  don't show staircase artifacts, because the edges of the mask
  are always straight horizontals or verticals.

--facet_crop_x0=<real>
--facet_crop_x1=<real>
--facet_crop_y0=<real>
--facet_crop_y1=<real>

  Defines the horizontal and vertical extent of the mask, both for
  elliptic and rectangular cropping. Values are in pixel units and
  refer to source image coordinates.
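
Taken together, a hypothetical crop specification for a circular fisheye facet might look like this in a lux ini file - the pixel values are made up and must be taken from your own images:

facet_crop_active=yes
facet_crop_elliptic=yes
facet_crop_fade=10
facet_crop_x0=100
facet_crop_x1=3100
facet_crop_y0=0
facet_crop_y1=3000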

At times you may wish to see only a specific image out of the set of facet images in a facet map. This can be done by passing --solo on the command line, or via the UI or GUI:

--solo=<integer>

  This option will only display the given facet. While a facet map is
  displayed, you can change the 'solo' facet with the GUI or by pressing
  Shift+Right or Shift+Left. The --solo command line argument will rarely
  be useful for 'normal' viewing sessions, where 'solos' would be
  initiated via the UI - it's more for batch processing, e.g. in
  conjunction with --next_after_snapshot to produce snapshots of
  single facets in 'warped' form, e.g. to be stitched with a stitcher.
  Note that a facet map display started with --solo will still read
  all facets from disk. Switching from one facet to the next is therefore
  done instantly, because the data are in memory already.
  This is also handy when displaying exposure brackets with
  --blending=hdr: soloing facets in hdr blending mode will show the
  contributing exposures in the brightness which is blended into the
  hdr-merged output and gives an idea of each facet's contribution.
  Just to make this quite clear: Soloing facets shows their content
  *with facet_brightness already applied*. If the input is an exposure
  bracket, the shot with the longest exposure will likely look
  totally blown. This does *not* mean that HDR blending will actually
  use the blown pixels - the blending function will only pick content
  which is suitable. If the input is an exposure bracket in linear RGB
  (which is preferable), each 'solo' image should look the same, except
  for overexposed pixels showing some shade of grey or white and for
  underexposed pixels, which are subject to more noise and banding in
  shorter exposures. By comparing the 'solo' images to the blended result,
  you'll see how the blending function picks dark content from long
  exposures and bright content from short ones. As the same output
  brightness is used for solo display, you can even zoom into, say,
  a dark area, brighten it to see what's there and then 'solo through'
  the contributing facets to see what would come out if you only had
  the one facet rather than a full bracket.
  Because 'soloing' does not modify any parameters, the displayed
  region remains the same and you can easily evaluate each partial
  image's contribution. You can even just 'throw together' a few images
  as a facet map and quickly alternate between them, to, e.g., evaluate
  details and pick the best image from a series. A quick way is to use
  hugin's PTO generator, which produces a PTO from any number of images,
  without automatically registering the images.
  When 'soloing through' a panorama, you may want to set the target
  projection and output HFOV to values which display the entire panorama:
  If the solo facet is outside the current view, lux will only show black
  and you may 'get lost'.
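
As an example, a hypothetical batch call producing a warped snapshot of one facet and then moving on - assuming facets are numbered from zero, and with a placeholder file name:

lux --solo=0 --next_after_snapshot=yes my.pto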

There is one more argument in the context of facet maps: 'fully_covered'. This takes some explaining. The default is 'fully_covered=no', which will work for all cases, and is safe. Using 'fully_covered=yes' is a bit faster, but it's only usable for certain specific image sets. Picture yourself inside the pano sphere. Every facet in a multi-facet display is placed on a plane touching the pano sphere. If you have enough planes in the right places, you'll find yourself in a polyhedron, and each of the polyhedron's faces is one facet. 'fully covered' means that every facet image will fully cover the facet it's projected onto, and that there is content enough for the whole 360X180 degree view. So it's not just the latter requirement: if you have a facet image which does not cover the facet it is projected onto, the 'whole show' can't be 'fully covered'. Typically, 'fully covered' facet maps are a result of careful design. A valid cubemap, for example, can be used to produce a valid 'fully covered' facet map by simply rewriting the specification:

projection=facet_map
fully_covered=yes

facet=front.jpg
facet_hfov=90.5
facet_yaw=0

facet=right.jpg
facet_yaw=90

etc.

If you have a registered image set for a full spherical which has complete content (the whole panosphere is covered by image data) you also have a case for fully_covered=yes.

Stripe panoramas, on the other hand, are obviously not 'fully covered', because they have parts where there is no content. You can view them with 'fully_covered=yes' as long as you don't stray into the 'uncovered' area - if you do, lux will crash. Cases which have 360X180 content but are not fully covered are harder to construct. So the short rule for facet maps is: if in doubt or if it crashes, use fully_covered=no - this is also the default if you don't specify anything.

The geometrical model for facet maps is that all facets touch the pano sphere with their central point. The 'edges' of the facets, as seen in the viewer if your images don't 'fit' properly, are not specified explicitly. If your facet images have transparent bits, you will see other facets 'shining through', if they have non-transparent pixels 'in the right place'. And, of course, if you're doing image fusion, the images will all contribute in some way, according to their intrinsic quality.

Apropos geometry: the edges which arise 'automatically' when lux shows a facet map coincide roughly with the edges of a single 3D voronoi cell with all neighbours equidistant. And projecting this voronoi cell's surface onto the inscribed sphere yields a spherical voronoi diagram. The relationship with voronoi diagrams results from using the distance-to-center as ranking criterion, which is equivalent to the voronoi cell criterion of forming regions where no point is closer than a specific one. The actual ranking lux uses is a bit more complex: it also takes into account the partial images' field of view and gives lower rank to pixels which are 'very close' to the image margin.

Patterns of facets can be pretty as well and have a visual appeal in their own right. Just to give you some inspiration: try the other Platonic solids beyond the cube, like the dodecahedron. And you may want to play with using the same facet many times, or with mixing facets which have nothing to do with each other...

When making facet maps from unmodified images straight from a camera, the seams will usually be visible if there is no feathering or other seam processing to 'blend' the images. Not using feathering is okay when facet maps are used to display sets of facets which have been created artificially to serve as the source of a multi-facet display, in the same way that the cube faces of a cubemap are created just for this purpose. But facet maps are also meant as a way to do simple stitches from registered image sets, and for such facets, the seams are often visible, because the image registration is not perfect due to parallax, scene change or exposure variations.

One way to make such flaws less visible is feathering: blending the facets together where they meet. The facet map semantics automatically locate the seams - there is no choice, as there is in other image blending schemes. But feathering is quite possible at the cost of a few more cycles: near the line where two facets 'meet', their RGB values are mixed, so that right at the point where they meet both facets contribute equally. Since lux has 'snap-to-stitch', feathering has become less useful: you see the seams in animated sequences, but after the viewer has been at rest for a little while, the properly blended image 'kicks in' and the seams disappear. But the feathering code is still there for now.

To get feathering (which is off by default), pass --feathering=XXX on the command line. This only has an effect if blending is 'ranked' - not 'hdr', where there are no seams. The larger the value you pass, the wider the feathering region. Start out with values like 5. Of course, feathering comes at a cost: it requires extra cycles and will slow down rendering. This is especially noticeable when feathering facets with alpha processing enabled, because the calculations needed to take both alpha blending and feathering into account at the same time are quite involved. 'Proper' stitching with lux' version of the Burt&Adelson image splining algorithm does a much better job than feathering, but it takes a long time to compute. So feathering is more for 'live' displays - but it's probably better to create a blended rendition of the image set and then view the result. The feathering code is from a time when lux could not yet do 'proper' stitching.
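
For example, to view a PTO file with a moderate amount of feathering (the file name is just a placeholder):

lux --feathering=5 my.pto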

Note that when using 'proper' stitching, you can still use feathering, which may help if the image splining code does not suffice to make the stitch 'smooth' at the facet borders.

The advantage of using a facet map to look at a set of registered images is obvious: you can look at it straight away with lux, without having to 'stitch' the images into a single combined panorama. The disadvantage is that facet maps require (much) more memory than a single stitched panorama. When you're running into memory problems with large image sets, you may want to disable production of elaborate image pyramids and interpolators by passing --build_pyramids=no. For mere viewing, this is usually quite good enough. If you still run out of memory, you'll have to sacrifice resolution, e.g. by using --squash=1.
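
So a memory-friendly call for a large image set might look like this (the file name is, again, just a placeholder):

lux --build_pyramids=no --squash=1 large_set.pto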

I have added a python script 'pto2pv.py' to the repo, plus my old PTO parser module, 'parse_pto.py'. With this python code, you can 'translate' a PTO file to a facet map ini file; please see the comments in pto2pv.py. It's as easy as

python3 pto2pv.py input.pto > output.ini
lux output.ini

Your mileage will, of course, vary - if the registration in the PTO file is good, the result gives a good idea of how the stitched version will look. I haven't touched the script in a while; it may not work correctly.

Keep in mind that lux' facet maps know nothing of some 'advanced' PTO features like translation parameters - you'll get the best results with image sets which are only optimized for hfov, position, lens correction and brightness. lux can now process a fair subset of the PTO format, but some PTO features don't have a syntactic equivalent in lux ini file syntax: with a lux ini file, you can't specify masks, stacks or lens cropping. So my python script to translate PTO to lux ini files can't be used for PTO files with such 'advanced' features.

lux knows several 'blending modes', and there's a heuristic method to figure out which mode is appropriate for a given facet map: if the image positions differ significantly, lux assumes it's a panorama (blending=ranked). If the image positions are very similar, lux assumes it's an image set for HDR blending (blending=hdr) unless the Ev values are very close, in which case it's taken as an image set for deghosting (blending=quorate). Most of the time the heuristic will pick the correct blending mode, but you can always override it by specifying the mode explicitly.
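
If the heuristic guesses wrongly - or if you want a different result - pass the blending mode explicitly, e.g. to force deghosting of a series (hypothetical file name):

lux --blending=quorate series.pto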

When in 'ranked' blending mode, if several facets provide content for a given location, lux applies 'ranking', which determines which pixels are considered 'in front' of others. Without further specifications, ranking is (roughly) by a pixel's distance from its facet's center. To allow for other ranking schemes, I have added the notion of a facet's handicap: a fixed value added to the distance, artificially putting facets 'further back'. There are several handicap schemes. The default is to look at the facet's hfov and prioritize facets with small hfov, to put tele shots 'in front of' wide angle shots. Another scheme prioritizes the facets by number - facets with low numbers will appear 'in front of' facets with higher numbers. Finally, handicaps can be assigned explicitly by passing facet_handicap values. The ranking mode is passed via 'facet_priority', which can take the values "none", "explicit", "hfov" or "order". Using facet_priority=none ranks strictly by distance from facet center - the 'handicap' is zero in this case. Note that assigning handicaps will prevent feathering between facets of significantly different handicap. These settings are experimental and may not work as expected; the default usually does a good job. When 'looking at' a PTO file, you can add additional arguments like the prioritization mode on the command line.

Initially, all blending in lux was done 'ranked': each pixel on every facet is assigned a ranking value, and pixels with smaller rank are considered to be 'in front of' other pixels. If these pixels are opaque, they occlude other pixels, and if they are transparent, alpha blending is done so that other pixels 'shine through'. Handling transparency already produced situations where pixels from several facets would be 'mixed' to create output pixels, and this mixing of several source pixels is taken further with the other two blending modes: 'hdr' and 'quorate'. These two modes don't assign geometry-derived rank at all, but instead look at the source pixels' 'quality' according to some quality criterion and assign weights which depend on the quality. For typical HDR blending, for example, a pixel is considered 'good' if its intensity value is neither very high nor very low; this quality measure is also known as 'well-exposedness'. This mode gives zero weight to overexposed pixels and to very dark ones. If there are several source images with different Ev values, the 'best' pixels will be those from exposures which put them into the middle of the range. The 'quality' measure is used as a weight in a weighted summation of the contributing source pixels, which may also be brightened or darkened (for HDR blending) or left as they are (for exposure fusion). 'quorate' blending needs at least three partial images and favours pixels which agree with each other, while suppressing 'outliers'.

Note that for correct HDR merging, you should do the internal processing in linear light: use --process_linear=yes, which is also the default. You can HDR-merge images with alpha channel: pixels with smaller alpha values will be weighted less than pixels with higher alpha values, and the final pixel will have the same alpha value as the contributing input pixel with the highest alpha value. I think this is a pragmatic solution, but it's debatable. I don't always get good results from HDR-blending sets of JPEGs; I found the best results come from using linear input data, like linear 16bit TIFF files created from RAW shots. The poor results with JPEGs are probably due to lux' ignorance of the camera response curve (it merely uses an sRGB->RGB conversion). Note that to correctly process linear input, you must pass facet_is_linear=yes. When using PTO files to provide the registered set of images, you need to pass this argument on the command line:

lux --facet_is_linear=yes --blending=hdr some.pto

The single facet_is_linear argument will be taken over for all facets; alternatively you can specify --is_linear=yes, which will also be taken over for all facets.

'live' HDR merging in lux produces passable results and allows for evaluation of the viability of a registered image set. Together with 'soloing' (just press Shift+Right/Shift+Left) it's a good way to evaluate the fit and likely outcome of an HDR merge. But - at least for now - there are not many parameters to influence the outcome, and dedicated HDR blending software will often do better than lux. If you've established that a set of images will likely make a good HDR image, you may be better off doing the merge in specialized software rather than in lux - but giving lux' HDR merging code a try is cheap, and you may well be happy with the output. Especially when working from bracketed JPEGs, I advise recalculating the light values (press Shift+L) before doing the snapshot. Keep in mind that you want to set --snapshot_extension=exr to get openEXR HDR output. If you output to other formats, you'll still have the benefits of noise removal and better-quality shadows, but you'll lose overly bright parts due to the saturation arithmetic. openEXR output will capture the whole brightness range present in the input image set.

From version 1.1.0, lux supports source image cropping and exclude masks. I decided to support this PTO feature in lux because it's a quick and convenient way to retouch unwanted content, but it wasn't easy to get it right, and it only works correctly with the new associated-alpha code which will be merged into master for the 1.1.0 release. Source image cropping, aka lens cropping, allows assigning a circular or rectangular mask to an image, which is meant to exclude parts of the image which don't show image content - like the dark area around a circular fisheye image. Without this feature, lux could not display circular fisheye facets adequately.

From version 1.1.1, lux will support all types of PTO masking and also panoramas from stacks, where the stacks are exposure-fused before being stitched.

When processing facet maps, the brightness values for the individual facets may not be correct. lux can determine 'good' facet brightness values by gathering pixels from all contributing facets which aren't 'too bright' in any of the facets and using those pixels to produce new brightness values. When using lux interactively, this can be done by pressing 'Shift+L', which will adjust facet brightness via an override argument - so the change is lost after tabbing to the next image (set). Shift+L triggers a 'show-again' cycle, meaning that the same facet map is displayed again after 'taking in' override arguments, which is how the modified facet brightness values are introduced.

If you want the light balance to be done straight away, you can tell lux on the command line:

--light_balance={auto|by_ev|hedged}

  When passed 'auto', the brightness of the facets will be compared
  where they overlap and lux will try and find per-facet brightness
  values which are 'balanced', meaning that they minimize brightness
  differences between the facets. This is similar to 'photometric
  optimization' in panotools. While viewing images, the same effect
  can be produced with 'Shift+L'.
  The default, 'by_ev', simply uses the Ev values given in the PTO
  file. Passing 'hedged' will make lux override all Ev values and
  set them to 1.0.

--light_balance=auto is good for batch processing. Let's assume you have PTO files for brackets in 'bracket1.pto' and 'bracket2.pto'. If you want to make source-sized snapshots with automatic brightness values, you'd call lux like this:

lux --light_balance=auto --snapshot_like_source=yes \
    --next_after_snapshot=yes --snapshot_extension=exr --blending=hdr bracket?.pto

Once the sequence terminates, you'll have two exr files with the HDR-blended brackets. Automatic brightness adjustment is especially useful for brackets which are not in linear light. The images in such series - often JPEG images - may not be in 'standard' sRGB, and the Ev values in the PTO file may not refer to the linear RGB equivalent of the images, which is what lux expects. Automatic brightness adjustment is, at least, an informed guess, and often differs a fair bit from the Ev values the camera records in the image's metadata.

One more thing about 'live stitching': when using 'proper' stitching software, you make decisions about the output, which are manifest and fixed in the output image. If you want a wide dynamic range, you have to stitch to an HDR format like openEXR, and if you combine images with different resolution, you have to find a common resolution to be used for the output. 'live stitching' frees you from these constraints. If the viewer displays a section of an image which has high resolution, you can zoom in and actually see the details up to the level of what's provided in the source facet, while doing a stitch at this resolution may well exceed your system's capacity. So here is a handy way to 'fill in' high-resolution content where it makes sense: the interesting far-away range of high peaks can be provided by a few tele shots, while the 'boring' - and huge - sky can be dealt with by low-res fisheye shots, because no one will ever want to zoom into it. Facets of different brightness can be viewed dimmed or brightened, so you can explore dark shadows or the structure of bright clouds if your source images provide such content - which would otherwise require an HDR stitch or exposure fusion.

It's my conviction that 'live stitching' is the future and will eventually become more common than producing 'fixed' stitched output. For now, the computational load of live stitching is often too large to compute animated sequences smoothly, but it's just a matter of time until this is no longer an issue. With lux' new capability to directly read PTO files, it's easy to simply try out how well it works for a given image set.


Exposure Fusion and Image Stitching

I have added code to lux which can do exposure fusion and image stitching jobs in the background, using the technique published by Peter J. Burt and Edward H. Adelson in their article 'A Multiresolution Spline With Application to Image Mosaics', which was used for exposure fusion by Tom Mertens, Jan Kautz and Frank Van Reeth, as described in their article 'Exposure Fusion'. I moved the code into a separate source (pv_combine.cc) for quicker turn-around, and I am still in the process of moving bits of code around and tweaking my modified version of the algorithm, but now I seem to have arrived at a point where I can do three things properly:

The calculation, using several image pyramids, is computationally expensive (especially when producing large-size output; view-sized fusions are quite quick). It's done in the background by a dedicated (set of) thread(s), just like other snapshots, and there is no immediate 'live' view of the fused stack or the stitched panorama - the 'final' view is only calculated when the viewer is at rest, and it may take a while to show, due to the long calculations required to produce it.

If you do snapshots of the current view with lux (by pressing 'E') and the viewer shows an exposure fusion or a panorama, the output will be rendered as such and should look like the view on-screen.

To batch exposure fusion jobs, pass --next_after_fusion=yes instead of --next_after_snapshot=yes (which is for snapshotting 'live' views). You may combine this with --light_balance=auto to do a light balance before the snapshot, just as it's done for HDR blending. For batching stitching jobs, use --next_after_stitch=yes.

There are now several command line arguments which combine several settings into one, to make stitching/fusing to the specifications in a PTO file more convenient. The output is created in the shape and projection given by the PTO file's p-line, and lux forwards to the next image after the output is ready (or terminates if there are no more images):

--stitch=yes interpret PTO file as panorama and stitch it
--fuse=yes interpret PTO file as exposure bracket and fuse it
--focus_stack=yes interpret PTO file as focus stack and fuse it
--deghost=yes interpret PTO file as serial image and deghost it
--hdr_merge=yes interpret PTO file as exposure bracket and hdr-merge it

For faux brackets, there's one more such argument. It produces a 'standard' faux bracket with Ev values -2, 0 and 2 - either from an HDR image or from a bracket. For single image input, the output will have the source image's shape:

--compress=yes produce a 'faux bracket'

Other snapshot-related parameters are honoured: you can determine the file type of the output with --snapshot_extension, and its magnification with --snapshot_magnification. Passing --snapshot_like_source=yes on the command line will have the same effect as the 'Shift' in 'Shift+E' and render output as specified by the p-line of the PTO file - or, if you pass snapshot_facet, one of the facets. The output will use the same white balance, overall brightness, black and white point as the current view, so you can tweak these values with the 'live view' and render fusions once you think you have them right. Exposure-fused output will have a 'fused' infix and be otherwise named like other snapshots; stitches will have a 'pano' infix.

Exposure fusion has a few extra parameters which influence the outcome. First, there are --exposure_mu and --exposure_sigma, which default to .5 and .2, the values given in the 'Exposure Fusion' article. They pertain to weighting by 'well-exposedness'; please refer to the original articles for now - or to the manual of 'enfuse', which uses the same parameters. Next, there are --exposure_weight and --contrast_weight. So far I have only implemented these two quality criteria - the first is for 'well-exposedness' and the second for local contrast. These weights are used additively, not in the power function proposed in the literature - I stick with enfuse's modus operandi in this respect. You can pass any value - the result is normalized. The default is to only use well-exposedness, so --exposure_weight=1.0 and --contrast_weight=0.0. Contrast weighting in lux is done by looking at the gradient magnitude of a b-spline over the image data, which is quite different from the 'standard' approach. Use it with caution, and please note that the magnitudes of these two quality criteria may be quite different - usually the contrast gleaned from looking at the derivatives will be (much) smaller in magnitude, so specifying equal exposure and contrast weights will look like exposure weights only, because the contrast weight's contribution is not noticeable. When combining both weighting schemes, you'll have to pass (much) larger contrast weight than exposure weight values.
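
Following the advice above, a trial combining both quality criteria might pass a much larger contrast weight - the values here are just a starting point for experimentation, not a recommendation, and the file name is a placeholder:

lux --fuse=yes --exposure_weight=1 --contrast_weight=10 bracket.pto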

Both exposure fusion and image splining (explained further down) have one more parameter affecting the outcome: --exposure_pyramid_floor. This numerical value gives the minimum size of an image pyramid level. The default is 16, which is a tad more 'local' than the minimum of one, and slightly higher values are often good as well - but too high values make the response too 'local' and produce seam-like artifacts. The smallest possible value, 1, produces a very uniform result with no 'locality', which often lacks the 'vividness' that 'more local' results show. The default of 16 is an attempt at a compromise which provides pleasing results in most situations, but it's up to you to decide, and it may well be worth your while to try out different values for this parameter.
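
To try a slightly 'more local' fusion, you might pass (with a hypothetical file name):

lux --fuse=yes --exposure_pyramid_floor=32 bracket.pto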

Exposure fusion and HDR blending both rely on 'well-exposedness' of source pixels as a quality measure, which in turn determines the weight each contributing source pixel receives in the weighted summation. The image splining code applies these weights in a 'multilevel' fashion, working on separated frequency bands, to avoid artifacts which result from the 'naive' approach. When I implemented exposure fusion, I noticed that the exposure fusion code can be used to create HDR-merged output if the source pixels are weighted as they would be for an exposure fusion, but brightened/darkened as they would be for an HDR merge. And, even better, it's possible to vary smoothly from one effect to the other, with a 'normal' exposure fusion at one end of the scale and an HDR blend at the other. lux exploits this with the 'hdr_spread' parameter. The default for this parameter is 0.0, which results in a 'normal' exposure fusion. Setting it to 1.0 will instead produce an HDR merge with multilevel blending, which differs from lux' 'normal' HDR merge, which is done per-pixel. Passing values in the range of zero to one will produce a 'hybrid' result, where dynamic range compression increases from 1.0 towards 0.0. This may be a good way of compressing the dynamic range somewhat (e.g. to fit into an HDR-capable display's dynamic range), but less than a 'standard' exposure fusion would, which compresses the dynamic range to the same range as the source images', and more than a 'normal' HDR merge, which may produce a dynamic range which can't be displayed on a given monitor. Note that to actually view images with extended dynamic range on an HDR-capable display, you'll have to use a different viewer: as of this writing, lux will only display in sRGB.
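
A 'hybrid' with moderate dynamic range compression might be rendered like this - a sketch with made-up values and file name:

lux --fuse=yes --hdr_spread=0.5 --snapshot_extension=exr bracket.pto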

There is a variant on exposure fusion called 'faux bracketing'. This variant starts out with HDR input, like an openEXR file, but it can also start out with a bracket, like the 'proper' exposure fusion above, which lux will blend to HDR internally before doing the 'faux bracketing'. The algorithm proceeds to create a 'faux bracket' by producing several images from the input which vary in brightness, and then calculates an exposure fusion from them. For HDR input, this provides a way of compressing the dynamic range in the same way that 'true' exposure fusion does, but it still retains an extended dynamic range. If output goes to limited-range formats, the effect is very similar to a 'true' exposure fusion. Faux brackets are rendered if you pass --faux_bracket=yes to lux. Using only this parameter requires working with a bracket, and it will use the brightness values of the input images for the partial images made from the internally-produced HDR image. You may also pass - at least two - --faux_bracket_ev=... parameters, which give Ev values for the production of the component images passed to the exposure fusion - at least two, because only one would not make sense in an image fusion. Like all vector data in lux, you just pass as many as you intend the vector to contain. A typical faux bracket would be done like this:

lux --faux_bracket=yes \
    --faux_bracket_ev=-2 --faux_bracket_ev=0 --faux_bracket_ev=2 \
    my_image.exr

Or - using the simplified 'compress' argument:

lux --compress=yes my_image.exr

Batching the job works the same way for 'true' and 'faux' mode, use --next_after_fusion=yes to have the fusion job triggered and proceed to the next image (if any). Even just two Ev stops can be used for interesting effects like pulling a bright sky down or adding fill light.

Image stitching with the Burt and Adelson image splining technique uses essentially the same algorithm as the one used for exposure fusion, but quite different masks: where exposure fusion masks by 'well-exposedness', stitching uses spatial masks, defining which parts of the partial images should be 'on' or 'off'. These masks are generated by a 'special' rendering job which does not use the partial image's interpolator, but rather one yielding only 1.0 - while the other partial interpolators yield 0.0.

To fine-tune the image pyramids for the modified Burt&Adelson image splining algorithm, you can pass a set of parameters to lux which will set the spline degree of the splines used as pyramid levels, and the decimators used to create smaller pyramid levels from larger ones:

--bls_i_spline_degree=<integer>
--bls_q_spline_degree=<integer>

  Set the spline degree for the image pyramids and the weighting
  pyramids, respectively.

--bls_i_spline_decimator=<integer>
--bls_q_spline_decimator=<integer>

  Set the decimators. These function like 'pyramid_smoothing_level':
  positive values choose a spline reconstruction kernel of that degree,
  -1 is for 'area decimation', and -2 uses a 'convolving basis function'
  which is similar to 'ordinary' convolution with a small binomial
  kernel (.25, .5, .25). -3 uses a higher binomial kernel
  ( 1/16 * ( 1, 4, 6, 4, 1 ) ), -4 an 'optimal' Burt filter, and
  -(N*4-1) an N*4-1-tap half-band filter.

The defaults are to use degree-1 splines (bilinear interpolation) together with area decimation. This combination is fast and the results are appealing, but you may want to experiment with different settings. Some combinations don't produce good results, so your mileage will vary. A good alternative is to combine cubic splines (degree 3) with a convolving basis functor for both splines, but note that this will increase rendering times noticeably. Combining cubic or higher splines with half-band filters should be a near-optimal solution, but will also take the longest processing time. It may be a good choice for work where the very best quality is required.
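
For example, the cubic-spline-plus-convolving-basis combination mentioned above could be selected like this (with 'my_pano.pto' standing in for your own PTO file):

lux --stitch=yes \
    --bls_i_spline_degree=3 --bls_q_spline_degree=3 \
    --bls_i_spline_decimator=-2 --bls_q_spline_decimator=-2 my_pano.pto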

For stitching jobs, when you have the view you want your output to cover, press 'E' to capture the view shown by the viewer, or 'Shift+E' to produce a stitch as specified in the PTO file's p-line. The stitch for such snapshots will be done in the background, like other snapshots, and the same snapshot-specific parameters apply. To batch stitching jobs, use --next_after_stitch=yes, and you can also use --light_balance=auto for batch stitching if your light balance is off - and when working 'live', remember to use 'Shift+L' for light balancing. For stitching jobs, you often want a generous snapshot_magnification for the 'final' stitch after your trials at screen resolution come out nicely - if you don't go with the specs in the PTO.

Again, the current view's brightness, white balance etc. are taken over, so you have a lot of tweaking opportunities. Also remember that most panorama stitches are best done to target projections other than the default rectilinear, so you may want to pass, for example, --target_projection=spherical. In this context, be reminded of pressing 'L' when you lose the horizon - 'L' will make the view 'level' again. Note that lux now does output cropping, if that is specified in the PTO file's p-line. lux-generated output has lux-specific metadata, so you can load, say, a full spherical output straight into lux and expect the image to be projected adequately. For 360 degree work, you must have a view hfov of precisely 360 degrees, which is best done by specifying it on the command line (--hfov_view=360), after which you must not zoom. For full sphericals, you also need a vfov of precisely 180 degrees, and to get that right, it's best to work in a window with a precise 2:1 aspect ratio - so for a full spherical, you'd call lux with something like --fullscreen=no --hfov_view=360 --window_width=1000 --window_height=500 --snapshot_magnification=5.

Per default, when launched by pressing 'E' in an interactive session, rendering the synoptic image will be done by a few threads in the background. This may be undesirable - you may want to have the image rendered more slowly still. Use --snapshot_threads=... for the purpose; it fixes the number of dedicated threads for the job at hand - the default (0) means to use 'as many threads as the machine has physical cores'. The 'artificial slowdown' is there to keep lux responsive in an interactive session, assuming you want to carry on looking at the image. Note that you can't move on to another image while there are still stitches etc. going on in the background. When in batch mode (--next_after...), lux always uses as many threads as there are physical cores.
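
For instance, to have background rendering proceed slowly on just two dedicated threads (the thread count and file name are merely examples):

lux --snapshot_threads=2 my_pano.pto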

Per default, the image splining/exposure fusion code will now also be used for the display of facet maps when the viewer is at rest. This behaviour is controlled by two command line parameters: snap_to_stitch and snap_to_fusion. If they are set to 'yes', the viewer will, when at rest, render a single frame with 'proper' image splining/exposure fusion and display that once it's ready. This may introduce sudden changes in image content, which can be annoying, so the behaviour can be switched off. The change is usually more noticeable for exposure fusions, because the fusion code favours well-exposed content while the live HDR blending tends towards the average of the images - so switching off only snap_to_fusion can be a sensible choice.

With the new hdr_spread parameter, there is now an easy way to render HDR output from panoramas with stacks. The stacks are exposure-fused before they are blended into a panorama, but when you pass --hdr_spread=1, the stacks are fused to HDR images without dynamic range compression. So the final panorama has the entire dynamic range - you do need openEXR output, of course. Stitching an HDR panorama from stacked images can thus be done like this:

lux --stitch=yes --hdr_spread=1 --snapshot_extension=exr my_pano.pto

One last remark about openEXR output: when producing it, lux writes an additional lux file with suitable metadata. If the input is from a PTO file with cropping, this will be reflected in the ini file, and feeding the ini file as input to lux will produce a view using the correct projection, field of view and cropping. This makes working with openEXR data more convenient.


Benchmarking

You may want to know how fast lux is - either to get an idea about the amount of resources your system needs to run lux with varying tasks, or to compare different systems running lux on the same data with the same settings. Here's an example:

time lux -f3 -m2 -A1 -u -z1000 -ps -h360 pano.tif

This command runs a full-speed 1000-frame pan with a cubic spline interpolator on the 360 degree spherical image 'pano.tif' and prints the time it took.


Automatic Rendering Time Management

This is a reasonably new feature in lux and relies on heuristics established on my system, but should work on other systems as well. To reiterate: lux can squash frame rendering times with the 'global scaling' mechanism: instead of rendering full-sized (window-sized) frames, it can render smaller frames and upscale them to the window size using the 'global scaling' magnification factor, where the upscaling is left to SFML/openGL and happens on the GPU with no noticeable impact on CPU time.

Obviously, rendering smaller frames takes less time, and upscaling reduces the quality of the image, making it look blurred. Small global scaling variations are less noticeable than larger ones, and of course the resolution of the display and your viewing distance also make a difference.

Global scaling offers fine-grained control over rendering times. When adapting frame rendering times automatically, we need such fine-grained control in order to adapt the rendering time in sufficiently small steps. There are stumbling blocks, though:

One is lux' use of image pyramids. A small change in global scaling can make lux switch to using a different pyramid level, which has a larger impact on rendering time than the change to global scaling would have while sticking to the previous level. This change of rendering time due to a pyramid level change might start the system pumping:

global scaling goes up, a smaller (higher) pyramid level is picked, processing time therefore goes down, resulting in global scaling being lowered again, which causes a fallback to the previous pyramid level...

To avoid this, lux fixes the direction of the automatic changes to the global scaling: it only lets the automatics either keep on raising or keep on lowering it. Only when there is some user interaction is the state set to indeterminate, and the automatics can start either way when they kick in. This avoids pumping, but at times rendering can 'hang' in a suboptimal state. So at times, simply interacting with lux can bring it round from such a state.

The other stumbling block is operation in pan mode. When pan mode processing is off, all frames are calculated from 'first principles', including rather time-consuming coordinate transformations yielding the coordinates into the source image which are needed for remapping. Pan mode saves the transformed coordinates (into a 'warp array') and only applies a delta when using them. This saves a good amount of time.

But the warp array has to be the same size as the frame that is rendered, and if global scaling changes the frame size, the previously used warp array becomes invalid, and the 'capturing' of a set of transformed coordinates into a new warp array has to be done before pan mode can proceed. lux does not count the time it takes to calculate a new warp array as 'frame rendering time', but it creates system load nevertheless, and may result in brief stuttering if the system can't cope. To avoid frequent changes of frame size, the change to global scaling is done one largish step at a time, leaving the system some time to recover. This makes the automatics less fine-grained than one might wish, but in my experience the changes to global scaling are still so small with every step that the difference doesn't really show.


Handling Multiple ISAs in one Binary

lux contains code for several ISAs in a single binary. This feat took me quite a while to figure out, so I'll explain here how it's done. If you look at the makefile, you can see that pv_rendering.cc is compiled several times with different parameters. Normally, when compiling a source twice, the resulting object files will contain code labeled with the same symbols. You can link in both objects, but only the code from the first object will be executed - or, worse, your linker may refuse to accept redefined symbols. lux circumvents the problem by simply putting all rendering code into a namespace that varies with the intended ISA. The use of these specific namespaces goes 'deep', so that the subroutines used by the rendering code are also all in ISA-specific namespaces. To vspline users: lux redefines 'vspline' to some architecture-specific name, effectively using a separate vspline 'incarnation' for each ISA. This results in separate symbol sets for each ISA-specific object file, and they can all be linked together without any ambiguities or symbol redefinitions.

To use this scheme effortlessly, lux employs a dispatching scheme: calls into the ISA-specific code are routed via a virtual base class which offers the rendering code's functions as virtual member functions. The ISA-specific code inherits from the virtual base class and overrides the virtual member functions. Now invocation of the rendering code is done via a pointer-to-base-class, which points to an object of the chosen derived class. Once the pointer is set, the use of a specific ISA is fixed, and the calling code can be totally unaware of the routing: it merely goes 'via the pointer to base'. If you look for 'dispatcher->' in pv_no_rendering.cc, you can find places where the non-rendering part of lux calls into such ISA-specific code.

I elaborated this mechanism to make it easy to extend the set of ISAs which lux can handle. For the time being I do not offer code beyond AVX512f, but once I decide to do so, there is very little to do to enable it: essentially, compiling the rendering code once more with parameters for the new ISA, and adding the corresponding dispatch case.

Packaging all ISA variants in a single executable bloats the executable, but nowadays storage space is cheap, and having just one executable and no need for external dispatching code is IMHO well worth the (small) expense: writing external dispatching code is typically hard to do in a multi-platform manner and may require shell scripts or auxiliary programs which in turn have to be put somewhere, which again varies from platform to platform. On the other hand, if the size of the executable were to become an issue, building executables for only one or two ISAs is simple: it only requires omitting the unwanted objects from linking and removing any dispatching code which refers to them. The dispatch code itself is rarely executed and not critical for performance.


Bits and Bobs

If only the horizontal field of view is given, lux assumes that the optical axis hits the image at its center, and the vertical field of view is determined automatically. Alternatively, the vertical field of view can be passed in via the -v parameter.

This will usually only be necessary if you are working with cropped images, where the optical axis is off-center. The optical calculations in lux assume that the image's surface always touches the unit sphere where it meets the optical axis. The effect of cropping such an image can be parameterized by giving the angle from the 'back' pole to its top margin and the angle from the 'back' pole to its left margin. These two additional parameters can be passed in as -y and -x, respectively, in degrees. Let's say these values are Y and X.

With centered, uncropped images we have the relation

X = ( 360 - hfov ) / 2

Y = ( 360 - vfov ) / 2
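
As a worked example, for a centered image with hfov = 90 and vfov = 60:

X = ( 360 - 90 ) / 2 = 135

Y = ( 360 - 60 ) / 2 = 150

Passing -x and -y values which deviate from these describes an off-center crop.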

If you open a cropped image and the horizon isn't in the middle of the screen, you can compensate by 'horizon raising' with the H/Shift+H keys when the projection and image extents support this.

A wobbly horizon can be dealt with using a 'bias': apply roll, pitch and yaw until you have a level view with the horizon in the middle, then press F10 to fix the new orientation for the currently viewed image. The change persists until F10 is pressed again or a new image is displayed, the image itself is not modified in any way.

Pressing F10 sets an 'orientation bias': all changes to the viewer's orientation are now done relative to this orientation. The viewer will behave as if the image had been saved with the fixed center point and reloaded. Note that if you set the bias on a point which is not on the true horizon, or while the view is not level, you'll most likely make matters worse. While viewing images in 'mosaic' projection, F10 will merely record the current center and rotation of the view, and pressing 'Return' will return to these values, instead of returning to the initial values given on the command line (if any).

To deliver an appealing image without aliasing in scaled-down display, lux uses an image pyramid, roughly as strongly low-pass filtered as a Burt-filtered pyramid. By default, I use degree-7 b-spline reconstruction on the raw data, which produces mild blur and results, in my opinion, in a convincing scaled-down view. This is configurable (-F command line parameter), as is the scaling step from one pyramid level to the next (-S command line parameter, pass a real value; the default is 2.0).

The image pyramid makes it possible to display even large panoramas smoothly and without aliasing when zoomed far out, where working on the base data would produce a detrimental memory access pattern. This is easy to comprehend: if the image is very large and the view covers a large portion of it, individual points in the view will correspond with image points which are far apart, and each of these points has to be accessed in memory. Looking at a scaled-down view, these points are closer together in memory, which makes it easier to access them quickly.

These individual points, when picked from the 'base level', would display whatever would be visible at that point with the base level's resolution, which may be tiny details which shouldn't be visible at all when zoomed out - this is 'aliasing', and it would show in the display as high-frequency noise - visible as 'glittering'.

So using the image pyramid is doubly beneficial, as it avoids aliasing and lowers resource use.

Since lux keeps its interpolators fully in memory at all times, smooth operation will become difficult from a certain panorama size onward, when the system starts swapping. Keep an eye on the system monitor and avoid exceeding physical memory. Recent additions to the code have made startup faster: as soon as the raw image data have been loaded from disk, the first frames are rendered and displayed. This is done using a preliminary interpolator, which uses bilinear interpolation, so the still image won't be what is specified with -q (or, the default, a cubic spline). Next, the 'proper' interpolators are built by a background thread and will be used once they are ready. The 'proper' interpolators need a lot of memory, and if there isn't enough, lux will stick with the preliminary interpolator, which only needs as much memory as the image occupies on disk. When viewing very large images, you may choose to omit building the 'proper' interpolators by passing -s. The production of the interpolators in the background takes processing power, so initially lux may not run as smoothly as later on, when all is fully set up.

My system is an Intel(R) Core(TM) i5-4570 4-core CPU running at 3.20GHz, and I get smooth operation with most panoramas and situations. This CPU has AVX2 vector units. It's neither top-notch nor very new, but still does the job well. Less powerful CPUs may not be up to serving large displays smoothly, and even when global scaling significantly squashes processing times (at an expense) and the warp engine can supply the frames in time, there may still be stutter. I found that some window managers are problematic in this regard: for example, the default window manager (kwin) coming with kubuntu sometimes gives me stutter in windowed display, but not in full-screen, and using compiz makes the windowed display smooth - but compiz doesn't run so well with KDE on my machine. Using gnome, all seems well without ado. It's surprising that windowed display should be more problematic than full-screen - after all, the frame calculation times are lower, since the frames are smaller - but this is what I find happens, and I haven't yet been able to figure out why. I suspect it's because the windowing has to place the view among other data, whereas a full-screen display can just write to the whole frame buffer without having to share it with other content. You should definitely use the no-tearing option in your display settings.

There are a few situations which are hard to compute quickly, notably a strongly tilted display and views near the zenith/nadir. This is due to detrimental memory access patterns in these situations and can't be helped. With these, global scaling will squash processing times, so if you notice that the image blurs a bit when jumping to look at the nadir (with -g on), this is the reason. Also, displaying areas near the 'back' pole of a fisheye panorama is computationally expensive, because the source points need to be gathered from a ring near the image's edge with little regularity.

lux will honour a panorama's alpha channel. Panoramas often have an alpha channel which is uniformly opaque. The default is to check the alpha channel, if present, and to behave as if it weren't there if it is found to be fully opaque. Note that even a little bit of transparency in a single pixel will be detected and turn alpha processing on. You may override the default behaviour with several flags; check the invocation section (-c, -C, -a).


Background

Why yet another panorama viewer? I'm a panorama photographer myself, and I've always wanted to have my own software to display my panoramas. While there is a lot of software around to stitch panoramas, I didn't find a (free) viewer to my liking. So for a long time I made do with what there is, until... well, here's the background.

I started doing image processing back in the eighties. I had the use of a 512x512 video frame grabber card (a Matrox PIP-EZ) and a system with a 286 processor. Doing simple stuff with the images - even in assembler - took a long time, and image quality was not very good. Eventually I lost interest in image processing (I did sounds instead) until I got my first digital camera (a Canon Ixus 30). I had always had the idea of stitching images together at the back of my mind, and soon after I got the camera, I found first autostitch and then hugin.

Doing sound processing, I had started out in an era when numeric sound processing was not a mass phenomenon. There were CDs, but a single CD's content was already difficult to handle with a computer of that time, and you had to have a sound card to hear more than the built-in speaker's squeaks. The sound cards were hard to program, and producing sound in real time was a difficult task. Compare that to now: on-board sound is good enough for pretty much everything (unless you do production), and the computational load it produces is not really an issue any more.

So how about graphics? By now, chipset graphics have become quite powerful, and dedicated GPUs can do amazing things. But even most 'normal' CPUs these days are surprisingly powerful; in fact they are so powerful that a lot of graphical work can be done on the CPU and does not need a GPU. I think graphics will go the same way as sounds: they will eventually be handled by the CPU, and few people will need anything beyond what the CPU can do. This will be a great relief: instead of having to wrestle with a bunch of incompatible vendors and the circuitous paths needed to get their dedicated hardware to do something useful, it will be a simple matter of programming the CPU in your favourite programming language to do the image processing. Such code has several advantages: it's easily understood, future-proof, portable and debuggable. And, by now, it's feasible. One reason I wrote lux is to show that this is in fact so. It does still need a good deal of tweaking, and it's not yet always running smoothly, but I'm confident I'm on the right path.

We see this transition now with 2D imagery, which is moving into the 'CPU feasibility window'. 3D, the next dimension to become CPU-feasible, is still some years away, but eventually it will go the same way. When it comes to media, the limiting factor is the human sensory system's capacity to process stimuli, which is finite. As soon as CPU capacity can easily 'saturate' the human sensory system, there won't be any need for dedicated hardware. Of course this statement is simplistic, but you get my drift.

lux relies heavily on my own interpolation code using uniform b-splines, which employs multithreading and hardware vectorization. This code exploits what modern CPUs have to offer, and lux wouldn't be possible without recourse to these techniques. But the important point is that these two techniques - multithreading and vectorization - are where Moore's law is being fulfilled nowadays. Clock rates may still increase, but only very gradually by now. Memory access is getting faster, but only moderately so. The 'music' is coming from more cores and wider vector units, and this is happening now and will not stop anytime soon. So writing code which exploits these features is a good investment in the future: if users get more cores, lux will just use them and scale nicely with their number. New vector units? They may become usable by simply recompiling, no need to wait for 'drivers' or an update of some 'graphics engine'. While writing such code takes a good deal of effort (a bit like programming sounds in the nineties), I am confident that my code will be 'good' for some time to come. And another decade down the line, what is now cutting edge may well be the most normal thing to do.
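
To give a flavour of what 'exploiting multithreading and vectorization' means in practice, here's a minimal C++ sketch - not lux' actual code, which uses dedicated SIMD libraries - of the general pattern: the image is cut into horizontal strips which are handed to one thread per core, and the inner loop is written so the compiler can auto-vectorize it.

```cpp
#include <vector>
#include <thread>
#include <cstddef>

// Apply a simple per-pixel operation to a single-channel image, cut into
// horizontal strips which are handed to one thread per hardware core. The
// tight inner loop over 'x' has no dependencies between iterations, so the
// compiler is free to use the CPU's vector units for it.

void brighten ( std::vector < float > & img ,
                std::size_t width , std::size_t height , float factor )
{
  std::size_t nthreads = std::thread::hardware_concurrency() ;
  if ( nthreads == 0 ) nthreads = 1 ;
  std::vector < std::thread > pool ;

  for ( std::size_t t = 0 ; t < nthreads ; t++ )
  {
    // each thread gets a contiguous strip of rows
    std::size_t y0 = height * t / nthreads ;
    std::size_t y1 = height * ( t + 1 ) / nthreads ;

    pool.emplace_back ( [ &img , width , y0 , y1 , factor ]
    {
      for ( std::size_t y = y0 ; y < y1 ; y++ )
      {
        float * row = img.data() + y * width ;
        for ( std::size_t x = 0 ; x < width ; x++ )  // auto-vectorizable
          row [ x ] *= factor ;
      }
    } ) ;
  }
  for ( auto & th : pool )
    th.join() ;
}
```

With this structure, more cores simply mean more strips processed in parallel, and wider vector units speed up the inner loop after a recompile - which is exactly the scaling behaviour described above.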