HUMA manual (under construction): Basic concepts of time and space

HUMA projects may be presented on two-dimensional screens, but they are meant to be operated in space. Many functions in the system may be hard to understand, or may seem to make no sense, if one is not fully aware of the concepts along which media space and room space are mapped onto each other. The following explains those concepts and also looks at the software's concepts of time.


HUMA works with a one-to-one mapping of media space to room space, as shown on the left. Room space is not necessarily defined by the real dimensions of the room you show your project in, but by the capabilities of the interfacing device in use. With a motion tracking camera mounted on the ceiling or a laser scanner positioned underneath the screen, this may well be the entire room; with a simple infrared distance sensor it will be no room at all, but only points along its light beam. Using the mouse as a "sensor" turns the entire display area into the tracked area.
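Whatever the interfacing device, its raw readings have to be brought into the shared media/room coordinate space. The following is a minimal sketch of that normalization step; the function name and the sensor range used are made-up example values, not taken from HUMA or from any specific sensor.

```python
# Illustrative sketch: map a raw sensor reading into a normalized
# 0.0..1.0 room-space coordinate. The range 20..150 cm below is a
# hypothetical example for an infrared distance sensor.

def normalize(value, in_min, in_max):
    """Map a raw sensor value from [in_min, in_max] into 0.0..1.0."""
    return (value - in_min) / (in_max - in_min)

# A distance sensor only yields points along its beam, i.e. a single
# coordinate -- here interpreted as distance to the screen:
print(normalize(85.0, 20.0, 150.0))  # -> 0.5
```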
The x and y axes of input signals are defined as shown in the image: x coordinates increase from left to right, and y coordinates reach their maximum at the screen side. This differs from the usual computer coordinate system, which has its origin (0,0) at the top left corner and its maximum at the bottom right: HUMA's (0,0) origin is at the bottom left corner, its (max, max) coordinate at the top right! Having said that, it is up to you to invert sensor signals or to set up detection systems in a completely different way. Just keep in mind the basic idea behind this: you drive a movie forward by approaching the screen. This is the principle behind all forward/backward terminology in HUMA, and it also explains the prominent role of the y coordinate.
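Converting between the usual top-left-origin screen convention and HUMA's bottom-left-origin convention only requires flipping the y axis. The sketch below illustrates this; the function name is hypothetical and not part of HUMA's actual API.

```python
# Sketch: convert a point from the conventional screen coordinate
# system (origin top-left, y increasing downward) into HUMA's
# convention (origin bottom-left, y increasing toward the screen).

def to_huma_coords(x, y, width, height):
    """Flip the y axis; x already runs left to right in both systems."""
    return x, height - y

# A mouse event at the top-left corner of a 1920x1080 display ...
print(to_huma_coords(0, 0, 1920, 1080))     # -> (0, 1080)
# ... and one at the bottom-right corner:
print(to_huma_coords(1920, 1080, 1920, 1080))  # -> (1920, 0)
```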






Although we try hard to weaken the restrictions that the time-based character of video imposes on the possibilities of presenting and experiencing it interactively, these restrictions still exist to a certain degree and have to be taken into account. In general, the presentation of a HUMA project follows a timeline, or rather multiple timelines, as every scene has its own, and scene B does not necessarily have to follow scene A. Nevertheless, inside a scene you are bound to a timeline (referred to as "movie time" in the image on the right). It does not need to be spooled off at a fixed framerate, it can change its playback direction at any time, and so on, but it is still present.

There are a number of ways to weaken its restrictions. HUMA lets you enter and exit scenes in variable ways, using changeable offsets and custom finishing modes, so that a movie can show continuous images although it follows no fixed path through its scenes. There are simple templates for attaching movie playback to driving elements other than time. And last but not least: what HUMA shows does not need to be one preproduced file, but may be pieced together from multiple elements.

All stage rendering in HUMA, that is, the process of compositing separate elements into one display image, takes place in realtime. This gives you the possibility to manipulate these elements before they are rendered to stage: you may simply switch them on and off, change their size and z-ordering (layer), or change the transfer mode in which they are blended with other elements. Some special kinds of media (embedded movies and Flash) even offer their own time coordinate system, so that parallel, independent timelines may exist. So, when authoring a HUMA project, you will both arrange video segments for continuous relinking and prepare the compositing scenarios and rules that control their realtime manipulation.
These preparations are done in multiple editors: you set links and paths in the scene bin and the scene editor; you arrange the layout of elements on stage in the media editors; you define the timespan in which media is available to be rendered to stage during import and on the timelines; and you set up rules by choosing HUMAmode templates and attaching actions to events.
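The compositing model described above can be sketched in a few lines: separate elements carry their own visibility, layer (z-order), and transfer mode, and the stage image is assembled from them at render time. All class and field names here are hypothetical, chosen for illustration only; they are not HUMA's internal data structures.

```python
# Minimal sketch of realtime stage compositing: each element can be
# switched on/off, reordered by layer, or given a different transfer
# (blend) mode before the stage image is assembled.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    z: int                         # layer: higher values render on top
    visible: bool = True
    transfer_mode: str = "normal"  # e.g. "normal", "add", "multiply"

def render_order(elements):
    """Return the names of the elements that would be composited,
    bottom layer first, skipping invisible ones."""
    active = [e for e in elements if e.visible]
    return [e.name for e in sorted(active, key=lambda e: e.z)]

stage = [
    Element("background movie", z=0),
    Element("embedded movie", z=2, transfer_mode="add"),
    Element("flash overlay", z=1, visible=False),
]
print(render_order(stage))  # -> ['background movie', 'embedded movie']
```

Because the list is re-evaluated on every frame, toggling `visible` or changing `z` on an element takes effect immediately, which is the essence of manipulating elements before they are rendered to stage.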