This manual was last updated 24 April 2011 for version 1.0.0 of Bino.
Copyright © 2011 Martin Lambers (marlam@marlam.de), Stefan Eilemann (eile@eyescale.ch), Frédéric Devernay (Frederic.Devernay@inrialpes.fr)
Copying and distribution of this file and the referenced image files, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. These files are offered as-is, without any warranty.
Bino is a 3D video player with multi-display support.
3D videos are more accurately called stereoscopic videos. Such videos have separate views for the left and right eye and thus allow depth perception through stereopsis.
All input files are combined into one media source, so you can have video, audio, and subtitle streams in separate files. The files are decoded with the FFmpeg libraries, so URLs and other special constructs are supported.
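For example, a movie whose video, audio, and subtitle streams live in separate files could be played like this (the file names are placeholders):
$ bino movie-video.mp4 movie-audio.ogg movie-subtitles.srt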
Bino supports a wide variety of stereoscopic input layouts and display techniques.
Synopsis:
bino [option...] [file...]
Bino supports the following stereoscopic input layouts:
The default is to autodetect the input type from the metadata stored in the file. If this fails, Bino tries to autodetect the input type based on the file name; see File Name Conventions. If that fails, too, Bino guesses based on the resolution of the input.
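If autodetection guesses wrong, the input layout can also be set explicitly on the command line. Assuming this version's --input option and type names (see bino --help for the exact spelling), a top-bottom video could be forced like this:
$ bino --input top-bottom movie.mp4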
Bino supports the following stereoscopic display techniques:
For stereo input, the default is stereo if the display supports it, otherwise red-cyan-dubois. The default for mono input is mono-left.
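The display technique is selected with the -o option, which also appears in the multi-display examples later in this manual. For example, to force Dubois-optimized red-cyan anaglyph output (movie.mp4 is a placeholder name):
$ bino -o red-cyan-dubois movie.mp4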
If the metadata stored in a file does not indicate its stereoscopic layout, Bino tries to guess it by looking at the last part of the file name before the file name extension (.ext).
The following file name forms are recognized:
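For example, assuming the suffix conventions used by this version (the pattern images later in this manual follow the same scheme), a name like movie-tb.mp4 would indicate top-bottom layout, and movie-lr.mp4 would indicate left-right layout.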
Bino reacts to a number of keyboard shortcuts during playback.
Make sure that the video area has the keyboard focus before using these shortcuts, e.g. by clicking into it.
The following shortcuts are recognized:
Many stereoscopic display devices suffer from crosstalk between the left and right view. This results in ghosting artifacts that can degrade the viewing quality, depending on the video content.
Bino can optionally reduce these ghosting artifacts. For this, it needs to know two things: the crosstalk levels of your display device, and the amount of ghostbusting that is adequate for the video you want to watch.
Please note that ghostbusting does not work with anaglyph glasses.
To measure the display crosstalk, do the following:
First, display the image gamma-pattern-tb.png and correct the display gamma settings according to the included instructions. You need to have correct gamma settings before measuring crosstalk.
Then, display the image crosstalk-pattern-tb.png and determine the crosstalk levels using the included instructions.
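Both pattern images can be displayed with Bino itself, assuming they are located in the current working directory:
$ bino gamma-pattern-tb.png
$ bino crosstalk-pattern-tb.png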
You now have three crosstalk values, one each for the red, green, and blue channels. Tell Bino about them using the --crosstalk option. For example, if you measured 8% crosstalk for red, 12% for green, and 10% for blue, use
$ bino --crosstalk 0.08,0.12,0.10
Once you know the crosstalk levels of your display device, you can set the amount of ghostbusting that Bino should apply using the --ghostbust option. This will vary depending on the content you want to watch. Movies with very dark scenes should be viewed with at least 50% ghostbusting (--ghostbust 0.5), whereas overall bright movies, where crosstalk is less disturbing, could be viewed with a lower level (e.g. --ghostbust 0.1).
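For example, to watch a dark movie with 50% ghostbusting on the display measured above (movie.mp4 is a placeholder name):
$ bino --crosstalk 0.08,0.12,0.10 --ghostbust 0.5 movie.mp4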
To check whether your crosstalk calibration is correct, display the crosstalk patterns with full ghostbusting, like this:
$ bino --crosstalk 0.08,0.12,0.10 --ghostbust 1.0 crosstalk-pattern-tb.png
The remaining crosstalk should optimally be 0%.
Bino supports multi-display output via the Equalizer framework.
This is how it works:
First, install Equalizer 0.9.3 (also known as 1.0-beta) or later. See http://www.equalizergraphics.com/. Verify that it works by running the included eqHello example.
Then, build Bino with Equalizer support by using the --with-equalizer configure option.
$ ./configure --with-equalizer
Now you need an Equalizer configuration file for your display setup.
Bino needs a two-dimensional Equalizer canvas (= combined screen area), subdivided into segments (= single display areas). For example, if you have two projectors that project onto a 2m x 1m screen side-by-side, then your canvas is 2m x 1m large, and you have two segments: the first segment fills the left half of the canvas, and the second segment fills the right half.
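As a sketch, the canvas for this two-projector example would be described like this in the configuration file (the fragment is taken from the first full example below):

canvas
{
    layout 0
    wall
    {
        bottom_left  [ 0.0 0.0 -1 ]
        bottom_right [ 2.0 0.0 -1 ]
        top_left     [ 0.0 1.0 -1 ]
    }
    segment { channel "left"  viewport [ 0.0 0.0 0.5 1.0 ] }
    segment { channel "right" viewport [ 0.5 0.0 0.5 1.0 ] }
}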
Next, Equalizer needs to know how to render into each segment. For this purpose, you define several hierarchical objects: nodes (= processes, possibly on different systems), pipes (= graphics cards), windows (= output windows with OpenGL contexts), and channels (= parts of windows). The video output happens at the channel level: each channel is assigned to one segment of the canvas. Most probably you just have one fullscreen window per pipe, and a single output channel per window.
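As a sketch, the resource hierarchy for one machine with one graphics card driving one fullscreen output could look like this (the channel name is arbitrary; the fragment follows the full examples below):

appNode
{
    pipe
    {
        device 0
        window
        {
            attributes { hint_fullscreen ON }
            channel { name "left" }
        }
    }
}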
Note that one node is special: the application node, which is the node that you initially start (the other nodes are started automatically by Equalizer). The application node is called 'appNode' in the Equalizer configuration, and Bino will play audio only on the application node. All video output is then synchronized to this audio output.
Once you have your configuration file (examples are given below), you can check if it works correctly using the eqHello example:
$ eqHello --eq-config configuration.eqc
Once you have made sure that this works, you can start Bino using this command:
$ bino -o equalizer --eq-config configuration.eqc video.mp4
Note that all your nodes need access to the video file using the same name, so a shared filesystem is helpful if you use multiple systems.
To play live video from a webcam or TV card, you can set up a streaming server using ffserver (part of FFmpeg) or vlc, and then give the appropriate URL to Bino. You can use multicast to stream the video to multiple systems efficiently.
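For example, once a stream is being served, every node can read it from the same URL (the address below is a placeholder):
$ bino -o equalizer --eq-config configuration.eqc http://streamserver:8090/stream.ts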
The output mode -o equalizer-3d allows you to configure non-planar projections. Bino projects the video onto a virtual screen in 3D space. The virtual screen is placed at the distance of the largest front-facing segment and sized to fill the wall optimally. By configuring the output segments accordingly, various advanced display setups can be used, e.g. displays rotated around the Z axis by an arbitrary angle, or non-planar screens.
In this example, you have a 2m x 1m screen and two projectors: one for the left half of the screen, and one for the right half. The two projectors are connected to two graphics cards on the same system.
In this situation, you have one node with two pipes, and each pipe has a fullscreen window with a single output channel. The first output channel is assigned to the left segment, and the second output channel is assigned to the right segment. The resulting configuration looks like this:
server
{
    config
    {
        appNode
        {
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "right" }
                }
            }
        }
        observer {}
        layout { view { observer 0 }}
        canvas
        {
            layout 0
            wall
            {
                bottom_left  [ 0.0 0.0 -1 ]
                bottom_right [ 2.0 0.0 -1 ]
                top_left     [ 0.0 1.0 -1 ]
            }
            segment { channel "left"  viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "right" viewport [ 0.5 0.0 0.5 1.0 ] }
        }
        compound
        {
            compound { channel ( view 0 segment 0 ) swapbarrier {} }
            compound { channel ( view 0 segment 1 ) swapbarrier {} }
        }
    }
}
In the following example, you have a 4m x 3m screen for 3D projection via passive stereo (e.g. polarization). You have two systems, "render1" and "render2", each equipped with two graphics cards. The two cards on "render1" generate two images for the left half of the screen: one for the left eye view and one for the right eye view. The two cards on "render2" generate left and right view for the right half of the screen. Additionally, you have a system called "master" which has a sound card and should display a small control window.
This setup is very similar to the situation shown in the image multi-display-vrlab.jpg.
The configuration looks like this:
server
{
    connection { hostname "master" }
    config
    {
        appNode
        {
            connection { hostname "master" }
            pipe
            {
                window
                {
                    viewport [ 100 100 400 300 ]
                    channel { name "control" }
                }
            }
        }
        node
        {
            connection { hostname "render1" }
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render1left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render1right" }
                }
            }
        }
        node
        {
            connection { hostname "render2" }
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render2left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render2right" }
                }
            }
        }
        observer {}
        layout { view { observer 0 }}
        canvas
        {
            layout 0
            wall
            {
                bottom_left  [ 0.0 0.0 -1 ]
                bottom_right [ 4.0 0.0 -1 ]
                top_left     [ 0.0 3.0 -1 ]
            }
            segment { channel "render1left"  viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "render1right" viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "render2left"  viewport [ 0.5 0.0 0.5 1.0 ] }
            segment { channel "render2right" viewport [ 0.5 0.0 0.5 1.0 ] }
            segment { channel "control"      viewport [ 0.0 0.0 1.0 1.0 ] }
        }
        compound
        {
            compound { eye [ LEFT ]  channel ( view 0 segment 0 ) swapbarrier {} }
            compound { eye [ RIGHT ] channel ( view 0 segment 1 ) swapbarrier {} }
            compound { eye [ LEFT ]  channel ( view 0 segment 2 ) swapbarrier {} }
            compound { eye [ RIGHT ] channel ( view 0 segment 3 ) swapbarrier {} }
            compound { channel ( view 0 segment 4 ) swapbarrier {} }
        }
    }
}
The -o equalizer-3d mode allows you to set up arbitrarily oriented screens using either the wall-based or the projection-based 3D frustum descriptions.
In this example we set up two 16:10 displays side-by-side which have been rotated around their Z axis by 1.3 radians (~74 degrees). The image multi-display-rotated.jpg illustrates this setup. Other setups include distortion-correct projection for curved screens, or arbitrarily placed screens in 3D space.
First, we rotate an axis-aligned screen by 1.3 radians and output the result:
eq::Matrix4f matrix( eq::Matrix4f::IDENTITY );
matrix.rotate( 1.3f, eq::Vector3f::FORWARD );
wall.bottomLeft  = matrix * wall.bottomLeft;
wall.bottomRight = matrix * wall.bottomRight;
wall.topLeft     = matrix * wall.topLeft;
std::cout << wall << std::endl;
yields a rotated screen centered on the origin:
bottom_left  [ -0.69578  0.6371 -1 ]
bottom_right [ -0.26778 -0.9046 -1 ]
top_left     [  0.26778  0.9046 -1 ]
This rotated screen has to be shifted along the X axis by -0.5195 m for the left screen and by +0.5195 m for the right screen, which places the touching edges of the two screens at the origin. The resulting wall descriptions are used for the left and right segment, as shown in the configuration below.
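A minimal sketch of this translation step, continuing the code fragment above (the leftWall/rightWall copies and the shift constant are our own naming, not part of Bino):

eq::Wall leftWall( wall );
eq::Wall rightWall( wall );
const eq::Vector3f shift( 0.5195f, 0.f, 0.f );
// shift the rotated wall along the X axis in both directions
leftWall.bottomLeft   = wall.bottomLeft  - shift;
leftWall.bottomRight  = wall.bottomRight - shift;
leftWall.topLeft      = wall.topLeft     - shift;
rightWall.bottomLeft  = wall.bottomLeft  + shift;
rightWall.bottomRight = wall.bottomRight + shift;
rightWall.topLeft     = wall.topLeft     + shift;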
The configuration uses the fullscreen outputs of two GPUs. By changing the node resource section, the outputs may be mapped to two computers. By disabling the fullscreen mode and setting 'device 0' for the second pipe, this setup can be simulated with two windows on a single machine.
global { EQ_WINDOW_IATTR_HINT_FULLSCREEN ON }
server
{
    config
    {
        appNode
        {
            pipe
            {
                device 0
                window
                {
                    viewport [ .215 .5 .4 .4 ]
                    channel { name "channel1" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    viewport [ .285 .1 .4 .4 ]
                    attributes { hint_drawable window }
                    channel { name "channel2" }
                }
            }
        }
        layout { view {} }
        canvas
        {
            layout 0
            segment
            {
                channel "channel1"
                wall
                {
                    bottom_left  [ -1.21528  0.6371 -1 ]
                    bottom_right [ -0.78728 -0.9046 -1 ]
                    top_left     [ -0.25172  0.9046 -1 ]
                }
            }
            segment
            {
                channel "channel2"
                wall
                {
                    bottom_left  [ -0.17628  0.6371 -1 ]
                    bottom_right [  0.25172 -0.9046 -1 ]
                    top_left     [  0.78728  0.9046 -1 ]
                }
            }
        }
        compound
        {
            compound { channel ( segment 0 ) swapbarrier {} }
            compound { channel ( segment 1 ) swapbarrier {} }
        }
    }
}