Saturday, December 31, 2011

time travel: oscillator slippage

The V2 parallel-processing rework of FFRend was a huge win overall, but it also introduced subtle bugs, some of which have taken a long time to surface. Another one turned up yesterday: oscillator slippage can occur due to time shift between plugins. In a pipeline, the further a given stage is from the output, the further into the future that stage is compared to the output. Any queuing between stages increases the difference even more.

Suppose a project contains two plugins, A and B, each containing one oscillator. A inputs to B, and B is connected to the output. The two oscillators have exactly the same frequency and appear to be in perfect sync. But if we stop FFRend in the debugger and examine the two oscillators, we'll find that they don't have the same value! A's oscillator is ahead by a small amount, because A is working ahead of B, i.e. A is operating on a frame that hasn't arrived at B yet. This is unavoidable; it's how pipelines (or assembly lines) increase throughput. We need to be careful when we say that A and B appear to be in sync. What we really mean is that A and B are in sync in the output! That's the synchronization we care about. A and B should be out of sync when viewed from an external point of view (e.g. the debugger), because that's our proof that parallel processing is occurring.

However, this necessary time shift means that complications can and do occur. For example, suppose we save the project. As currently coded, we're just copying the current values of the oscillators to the project file. But the current values aren't all from the same time! In other words, the current value of A's parameter represents a state several frames ahead of B's state, which is in turn several frames ahead of the output's state. Consequently, if we reopen the project, the oscillators won't be in sync in the output, despite the fact that they were in sync when the project was saved. The error occurs because we're violating an implicit assumption that all the values in the project file are from the same frame of reference.

To solve this problem, the save function needs to compensate the oscillator values for time shift. This should be doable because a) oscillators are periodic functions and can therefore be stepped in either direction with predictable results, and b) each plugin knows exactly how many frames ahead it is relative to the output state.
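As a sketch of the compensation (illustrative Python, not FFRend's actual C++ API; the names `compensated_phase` and `frames_ahead` are invented), rewinding a periodic oscillator is just subtracting its per-frame increment, modulo one full cycle:

```python
# Hypothetical sketch of save-time compensation. Assumes the
# oscillator's state can be reduced to a phase in [0, 1) and a
# frequency in cycles per frame; FFRend's internals may differ.

def compensated_phase(phase, freq, frames_ahead):
    """Rewind a periodic oscillator by the plugin's pipeline lead.

    phase:        current phase in [0, 1)
    freq:         frequency in cycles per frame
    frames_ahead: how many frames this plugin leads the output
    """
    # Because the oscillator is periodic, stepping backward is just
    # subtracting freq once per frame of lead, wrapping modulo 1.
    return (phase - freq * frames_ahead) % 1.0

# A plugin two frames ahead at freq .1 saves the phase the output
# actually "saw" two frames earlier.
print(compensated_phase(0.25, 0.1, 2))  # approximately 0.05
```

The same rewind, applied per plugin using its known lead, would also cover the engine-restart case described below, since each plugin knows exactly how far ahead it was when the pipeline was reset.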

Note that this problem is unaffected by pause, i.e. the problem occurs regardless of whether the engine was paused when the project was saved. This is because pausing the engine doesn't reset the frame pipeline (unlike stopping the engine, which does); all the frames are frozen in mid-flight, so however far ahead a given plugin was when the engine was paused, it remains that way.

An additional, related problem was observed: inserting a plugin while the engine is running also causes oscillators to slip. Here the issue is even more subtle. The engine can only modify the plugin array while stopped, therefore inserting a plugin necessarily involves stopping and restarting the engine. However, stopping the engine resets the pipeline, discarding any frames that were in progress but not yet output. Thus when the engine is restarted, all the plugins initially have the same frame of reference. But the oscillators still have the same states they had at the moment the engine stopped, when the plugins all had different frames of reference! Consequently there's now a mismatch between a plugin's oscillator states and the frame it's processing. The further the plugin is from the output, the larger the mismatch, and the worse the slippage.

This problem can be solved similarly to the one described above, by compensating each plugin's oscillators as needed in the case where the engine is restarting.

As previously mentioned, these problems took a long time to show up, and even when they did, it wasn't simple to verify them, because of time uncertainty. The most reliable tool for observing them was an invention I should have thought of long ago: a Freeframe plugin that simply writes its parameter to the input frame, as text. If you modulate the parameter with an oscillator, you're now stamping each output frame with an unambiguous record of how that oscillator's state changed over time. Most importantly, you're seeing the value the oscillator had when its parent plugin processed that frame, regardless of any time shifts due to pipeline backlog within the engine.

The plugin needs extra parameters for controlling the position of the text within the frame, so that different instances don't overwrite each other's text, but other than that it's totally straightforward. I'm thinking of making a similar plugin that draws the parameter as a waveform, like an oscilloscope.
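The stamping idea can be modeled in a few lines (a toy sketch, not the real Freeframe plugin: here the "frame" is just a grid of characters, and the position parameters are the invented `row`/`col`):

```python
# Toy model of the diagnostic plugin: burn the parameter value into
# the frame at a configurable position, so that two instances
# stamping the same frame don't overwrite each other.

def stamp(frame, value, row=0, col=0):
    text = f"{value:.4f}"
    line = frame[row]
    # Overwrite the characters at (row, col) with the value's text.
    frame[row] = line[:col] + text + line[col + len(text):]
    return frame

frame = [" " * 20 for _ in range(4)]   # a blank 4x20 "frame"
stamp(frame, 0.1234, row=0, col=0)     # instance A stamps the top row
stamp(frame, 0.5678, row=1, col=0)     # instance B stamps the next row
print(frame[0].strip())                # -> 0.1234
```

Because each stamp records the value at the moment its parent plugin processed that frame, comparing the stamped values on a single output frame reveals any slippage directly.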

Tests also show that even within a single plugin, oscillators may slip (become out of phase with each other), though the slippage is of a much lower order of magnitude. Whether they slip or not depends on the numerical relationship between their respective frequencies. For example:


Freq A    Freq B      Slippage
.1        .05         none
.1        .0125       none
.1        .0333333    .0002 after 2 million frames


This type of slippage is due to the use of floating-point in the oscillator, so there's probably not much to be done about it, but more research is needed.
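The mechanism is easy to demonstrate (illustrative Python; the real oscillators are C++, but assuming IEEE 754 doubles the arithmetic is the same). Repeatedly adding a frequency that has no exact binary representation accumulates rounding error, and the error compounds with frame count. This would also be consistent with the table above: .05 and .0125 are .1 divided by powers of two, which is exact in binary, so those pairs accrue proportional errors and stay locked, while .0333333 has no such relationship to .1 — though that explanation is conjecture pending the further research mentioned.

```python
# Ten steps of freq .1 "should" land exactly on one full cycle,
# but .1 is not exactly representable in binary floating point.

phase = 0.0
for _ in range(10):
    phase += 0.1

print(phase == 1.0)  # False: the accumulated phase is 0.9999999999999999
```

The per-step error is tiny (~1e-16), but over 2 million frames even partially cancelling errors can plausibly reach the .0002 observed above.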

Thursday, December 29, 2011

Fauve colorizing plugin released

Good news everyone! The FFRend project has released an exciting new Freeframe plugin that colorizes video in the style of Fauvism! Some still images demonstrating its effect are shown below.





The second example is taken from this short (15 second) demo clip.

The Fauve plugin is packaged in ckffplugs-1.0.13, available for download here:
http://ffrend.sourceforge.net/download.html
(Freeframe Plugins, Binary Release)
And for those who care about such things, the source is there as well.

The plugin is a visualization of a histogram but using pseudocolor instead of the customary graph. Pixels are colored according to the popularity of their original colors in the input image, such that more common input colors translate to brighter output colors. The effect enhances edges, adds texture, simplifies and drastically alters the palette, and reduces spatial cohesion. In Luma mode, the plugin:

1. Calculates the frame's luma histogram.
2. Replaces each pixel with a greyscale value corresponding to that pixel's rank in the luma histogram. The histogram is normalized so that pixels in the top rank are assigned white in the output, while pixels in lower ranks receive proportionally darker values.

RGB mode is similar, but does the above operations separately for each color channel, resulting in color output instead of greyscale.
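The two Luma-mode steps can be sketched as follows (illustrative Python; the actual plugin is a Freeframe effect, and this ignores details like bucketing and per-channel packing):

```python
# Rough sketch of Fauve's Luma mode: recolor each pixel by the
# popularity of its luma, so common lumas come out bright.

def fauve_luma(frame):
    """frame: 2D list of luma values 0-255; returns recolored frame."""
    # 1. Calculate the frame's luma histogram.
    hist = [0] * 256
    for row in frame:
        for luma in row:
            hist[luma] += 1
    # 2. Normalize so the top-ranked (most popular) luma maps to
    #    white (255) and lower ranks get proportionally darker greys.
    peak = max(hist)
    return [[hist[luma] * 255 // peak for luma in row] for row in frame]

frame = [[10, 10, 10, 200]]   # luma 10 occurs 3x, luma 200 once
print(fauve_luma(frame))      # [[255, 255, 255, 85]]
```

RGB mode would simply run the same two steps once per color channel.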

No doubt André Derain is rolling in his grave...



André Derain, The Houses of Parliament (1906)

Tuesday, December 27, 2011

FFRend adds playlist feature: handy for installations

The latest version of FFRend (2.2.01) introduces a playlist, better integrates the load-balancing feature, improves memory management, and fixes a number of bugs.

FFRend can now automatically open a series of projects, also known as a playlist. This feature is primarily intended to allow unattended use of FFRend, e.g. in an installation or gallery setting, though it's also helpful for VJs who need a break during a show. The user specifies the list of projects, the interval between project changes, whether the list should be played in sequential or random order, and whether it should loop. The playlist can be passed via the command line, in which case FFRend starts up with the playlist playing. This is particularly useful in combination with the new /fullscreen command-line option, which causes FFRend to start up full-screen on whichever monitor contains the output window.

The Load Balance feature was converted from a modal dialog to a control bar, allowing load to be monitored continuously during normal operation. The command is now View/Load Balance instead of Edit/Load Balance.

FFRend's handling of low memory conditions was improved, and advanced options were added which allow memory use to be limited in various ways. This can be helpful when working with large frame sizes.

Download:
http://ffrend.sourceforge.net/download.html

Release Notes:
http://ffrend.sourceforge.net/relnotes.html

FFRend (Freeframe Renderer) is a free, open-source renderer for Freeframe V1 plugins. It allows you to chain any number of plugins together in any type of routing, automate their parameters using oscillators, and display the output full-screen and/or record it to a file. FFRend has a modular interface, includes MIDI and dual-monitor support, supports plugin authoring via metaplugins, and leverages multicore CPUs by running each plugin in its own thread.

Wednesday, December 21, 2011

Bifurcations: digital emulations of analog video feedback

Lately I've been experimenting with digital emulations of analog video feedback, with interesting results. I enclose a link to sample clips below. All the projects in this family share the constraint of having no "content" (i.e. source plugins) other than Radar and WaveGen. No clips or Whorld, just Radar, WaveGen, and feedback (usually multiple luma feedback loops with different effects on each loop). I'm particularly excited about "Starry Night" and "Fireplace".

Bifurcations

However there's an unexpected problem. I'm finding that I can't get the kind of results shown in these clips without using some kind of mirror, kaleidoscope, or similar effect. I get the best results by automating the scale parameters of IscHypnotic. This is contrary to my experience with analog feedback, where all I typically need is:

1. Looping: in analog this is created by pointing the video camera at the monitor, but in digital the equivalent is e.g. an alpha mixer, luma key, or chroma key with one of its inputs looped back from the output.

2. Decay offset: in analog this is usually accomplished with a TBC, but in digital various types of color or luma shifts can be substituted.

3. Motion: in analog, the motion typically comes from moving the camera, but in digital a simple pan/spin/zoom should in theory work.
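As a toy model of those three ingredients (plain Python on a one-row greyscale "frame", not an FFRend project; all function names are invented, and a one-pixel pan stands in for real pan/spin/zoom):

```python
# Minimal model of a digital feedback loop: loop + decay + motion.

def decay(frame, amount=10):
    # 2. Decay offset: darken each pass so trails fade out.
    return [[max(0, p - amount) for p in row] for row in frame]

def shift_right(frame):
    # 3. Motion: pan the previous output one pixel to the right.
    return [row[-1:] + row[:-1] for row in frame]

def feedback_pass(source, prev_out):
    # 1. Looping: luma-key the source over the decayed, shifted
    #    previous output (keep the brighter pixel at each position).
    fed = decay(shift_right(prev_out))
    return [[max(s, f) for s, f in zip(srow, frow)]
            for srow, frow in zip(source, fed)]

src = [[0, 255, 0, 0]]
out = src
for _ in range(3):
    out = feedback_pass(src, out)
print(out)  # the bright pixel leaves a decaying trail behind it
```

In this reduced form the output is a predictable smear, which matches the observation above: without a mirror/kaleidoscope-style transform in the motion stage, the simple loop doesn't produce interesting structure.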

But when I try digital setups this simple, I don't get interesting results unless I make the motion component much more complicated. This isn't necessarily bad, but it does give the output a characteristic look (radial, or at least symmetrical). In the case of IscHypnotic it also introduces significant pixelization, which is one of the main objections to digital feedback.

Another interesting result is enclosed below. The FFRend project that generated this image always converges on the state shown. Perturb it all you like, and it either breaks completely or converges again. The branches are a perfect demonstration of the role of bifurcation in natural systems. If you ever had any doubts that nature is chaotic, this should dispel them.



But why does this particular project do it? I don't have any other projects that create this shape, and I can't get it to work without IscHypnotic either. It's all very puzzling. What is the role of the mirror/kaleidoscope/hypnotic in all this? The "Fireplace" clip suggests that there might be a way around it.

Examine the texture in the "Etched Chaos" clip. At some points it resembles feathers, or scales. Again it's clearly organic, and reminiscent of Mandelbrot and other fractals. There's a critical tipping point where it starts "budding" like a plant, but then it quickly exhausts itself and the moment slips away. Could it be possible to stabilize it?

Sensitivity to initial conditions is just the nature of chaos. I remember from the analog days that the feedback setups were absurdly sensitive. It was fussy work that required patience and a steady hand, and some people were better at it than others. There were magical cameras, magical monitors, even magical cables. Sometimes two cameras of the exact same model behaved drastically differently for no obvious reason. Nonetheless people managed to stabilize intense bifurcations for minutes or hours.

Back in the day, analog feedback purists often told me that I would never get similar results digitally. Electric Sheep was an interesting counter-example, but wasn't directly comparable since it's not even close to being a real-time process. I always held that it was just a question of time, and that parallel processing and Moore's law would eventually catch up. All of these new projects run at 30 FPS at XGA resolution on an i7 PC. The results convince me (and some of my most hard-headed analog purist friends) not only that it's possible to convincingly emulate analog feedback in real time, but that digital feedback has the potential to exceed analog feedback in terms of aesthetic variety.