Aviation Mapper is LIVE! Click here to use it.
The Aviation Mapper launch video
Running live at Sydney Airport streaming AvMap via 3G mobile Internet
Running the Aviation Mapper desktop app
Presentation at Dorkbot Sydney Finale 2011
Presentation at Ruxcon 2011: "Hacking the wireless world with Software Defined Radio"
UPDATE: NISRP has now been added to the official Winamp plugin catalogue!
More information is available on my wiki.
I wrote the following major system components:
[Initial tests indicate it can play over 500 videos simultaneously on one computer (with 2 HT CPUs and 1GB of RAM at the lowest LOD). TVisionarium is capable of displaying a couple of hundred videos without any significant degradation in performance, but there's so much still to optimise that I would be surprised if it couldn't handle in excess of 1000.]
With my latest optimisations, TVisionarium is able to play back 1000 shots simultaneously!
While profiling the system, I found that total CPU usage averages around 90-95% on a quad-core render node!
This indicates that those optimisations have drastically reduced lock contention, supporting far more fluid rendering.
Have a look at TVis in the following video:
This is an in-development 'video tube' test of the video engine:
(Watch it on youtube.com to leave comments/rate it if you like.)
1000 videos can be seen playing back simultaneously!
This is a preview video produced by iCinema:
This series of pages summarises the contribution I made to TVisionarium Mk II, an immersive, eye-popping, stereo 3D, interactive 360-degree experience in which a user can search through a vast database of television shows and rearrange their shots in the virtual space that surrounds them to intuitively explore their semantic similarities and differences.
It is a research project undertaken by the iCinema Centre for Interactive Cinema Research at the University of New South Wales (my former uni), directed by Professor Jeffrey Shaw and Dr Dennis Del Favero. More information about the project itself, Mk I and the infrastructure used is available online.
I was contracted by iCinema to develop several core system components during an intense one-month period before the launch in September 2006. My responsibilities included writing the distributed MPEG-2 video streaming engine that enables efficient clustered playback of the shots, a distributed communications library, the spatial layout algorithm that positions the shots on the 360-degree screen, and various other video processing utilities. The most complex component was the video engine, which I engineered from scratch to meet very demanding requirements (more details are available on the next page).
Luckily I had the pleasure of working alongside some wonderfully talented people: in particular Matt McGinity (3D graphics/VR guru), as well as Jared Berghold, Ardrian Hardjono and Tim Kreger.
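To give a feel for the spatial layout side of the work, here is a purely illustrative sketch of placing shot quads around a cylindrical 360-degree screen. It is not the actual TVis layout algorithm (which also drives placement from the semantic search results); all names and parameters are hypothetical.

```cpp
// Illustrative only: distribute shot quads evenly around a cylinder of the
// screen's radius, in a few rows. Not the real TVis layout code.
#include <cmath>
#include <cstddef>
#include <vector>

struct ShotPlacement {
    float x, y, z;     // world-space centre of the shot's quad
    float yawRadians;  // rotation about the vertical axis so the quad faces inward
};

std::vector<ShotPlacement> layoutOnCylinder(std::size_t shotCount,
                                            float radius,     // screen radius in metres
                                            float minHeight,  // bottom row height
                                            float maxHeight,  // top row height
                                            std::size_t rows)
{
    const float kPi = 3.14159265358979f;
    std::vector<ShotPlacement> placements;
    placements.reserve(shotCount);
    const std::size_t perRow = (shotCount + rows - 1) / rows;

    for (std::size_t i = 0; i < shotCount; ++i) {
        const std::size_t row = i / perRow;
        const std::size_t col = i % perRow;
        const float angle = 2.0f * kPi * col / perRow;              // spread around 360 degrees
        const float t = (rows > 1) ? float(row) / float(rows - 1) : 0.5f;
        const float height = minHeight + t * (maxHeight - minHeight);
        // The yaw convention depends on the engine; here the quad simply
        // faces back towards the cylinder's axis.
        placements.push_back({ radius * std::cos(angle), height,
                               radius * std::sin(angle), angle + kPi });
    }
    return placements;
}
```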
I fixed the lighting calculations and thought I would use a built-in texture:
Here is a fly-through of the standard tornado simulation with some pretty filaments:
Shortly after the presentation day, I ripped out the original physics code that someone (who shall not be mentioned!) had written in the minutes prior to the presentation and replaced it with more 'physically correct' code:
A little something I made in my spare time:
(More details coming later...)
Thanks to the generosity of Aras Vaichas, I came into possession of an old (1992) 60x8 dual-colour LED display. As it was just the display itself (no manual, instructions, software, etc.), I set about reverse engineering the board. Using my multimeter, I re-created the schematic for the board and found all the relevant datasheets online. Having figured out how to talk to the display, I interfaced it via the parallel port and wrote some control software for it. Once I could display various test patterns (multi-coloured sine waves), I 'net-enabled' the software so that the display could be controlled over a network via UDP packets - the resolution is so low that the entire LED configuration fits into a single packet! Finally, I wrote a plugin for Winamp that streams the frequency analysis of the playing song to the display, which produces results like this:
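To show just how small a full frame is on the wire, here is a minimal sketch of the "one frame per UDP packet" idea. My actual wire format isn't documented here, so this layout (2 bits per LED, row-major, 60x8 = 120 bytes) is an assumption for illustration; the real plugin ran under Winamp on Windows (Winsock), whereas this sketch uses POSIX sockets to stay self-contained, and the port and address are made up.

```cpp
// Pack a 60x8 dual-colour frame into a single small UDP datagram.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <array>
#include <cstdint>

constexpr int kWidth = 60, kHeight = 8;
enum Led : uint8_t { OFF = 0, RED = 1, GREEN = 2, ORANGE = 3 };  // 2 bits per LED

// Pack the whole 60x8 frame into a 120-byte payload (4 LEDs per byte).
std::array<uint8_t, kWidth * kHeight / 4> packFrame(const Led frame[kHeight][kWidth]) {
    std::array<uint8_t, kWidth * kHeight / 4> payload{};
    for (int y = 0; y < kHeight; ++y)
        for (int x = 0; x < kWidth; ++x) {
            const int bit = (y * kWidth + x) * 2;
            payload[bit / 8] |= frame[y][x] << (bit % 8);
        }
    return payload;
}

int main() {
    Led frame[kHeight][kWidth] = {};
    frame[4][10] = ORANGE;  // light a single LED as a test pattern

    const auto payload = packFrame(frame);

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);                          // port is hypothetical
    inet_pton(AF_INET, "192.168.0.42", &dest.sin_addr);   // display host (example)
    sendto(sock, payload.data(), payload.size(), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
}
```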
Using my modified version of ffdshow, which sends a video's motion vectors via UDP to an external application, I visualised the motion vectors from The Matrix: Reloaded inside my fluid simulation. The grid resolution is set based upon the macro-block resolution of the video sequence, and each 16x16 motion vector controls one spatially-matching point on the velocity grid. The following visualisation is taken from the scene where they are discussing the threat to Zion while inside the Matrix, before Neo senses that agents are coming (followed by Smith) and tells the ships' crews to retreat.
The final video is on YouTube, with the MVs overlaid on top of the source video.
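For the curious, here is a sketch of the mapping step, assuming one motion vector per 16x16 macro-block has already been received over UDP. The packet format of my ffdshow patch isn't shown, so the MotionVector struct and the scaling factor below are assumptions.

```cpp
// Copy decoded macro-block motion vectors into the fluid's velocity grid.
#include <cstddef>
#include <vector>

struct MotionVector { int mx, my; };  // units depend on the codec (e.g. half/quarter-pel)
struct Vec2         { float u, v; };

// Velocity grid sized to the macro-block grid: one cell per 16x16 block.
void injectMotionVectors(const std::vector<MotionVector>& mvs,
                         std::vector<Vec2>& velocity,   // mbWidth * mbHeight cells
                         std::size_t mbWidth, std::size_t mbHeight,
                         float scale /* converts MV units to grid velocity */)
{
    for (std::size_t y = 0; y < mbHeight; ++y)
        for (std::size_t x = 0; x < mbWidth; ++x) {
            const MotionVector& mv = mvs[y * mbWidth + x];
            // Each macro-block's vector drives the spatially matching grid cell.
            velocity[y * mbWidth + x] = { mv.mx * scale, mv.my * scale };
        }
}
```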
This is the new-and-improved fluid simulation in action. I'm perturbing the 'blue milk' with my mouse. Watch the darker region form and expand behind the point of perturbation. Due to the finer resolution of the velocity grid, the linear artifacts apparent in the earlier version have disappeared and it now looks smooth in all directions.
Velocity-grid-based 2D fluid simulation with effects that interestingly enough resemble Navier-Stokes simulations (well, a little anyway).
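As a rough idea of what a velocity-grid update like this can look like, here is a tiny sketch: the mouse injects velocity, the grid is damped each frame, and a dye ('blue milk') field is advected along it. This is not the actual simulation code; the grid structure, constants and nearest-neighbour sampling are illustrative simplifications.

```cpp
// Minimal velocity-grid + dye-advection sketch (semi-Lagrangian style).
#include <algorithm>
#include <cmath>
#include <vector>

struct Grid {
    int w, h;
    std::vector<float> u, v, dye;  // velocity components and dye density
    Grid(int w_, int h_) : w(w_), h(h_), u(w * h), v(w * h), dye(w * h, 1.0f) {}
    int idx(int x, int y) const { return y * w + x; }
};

// Mouse perturbation: add an impulse around (mx, my) in the drag direction.
void perturb(Grid& g, int mx, int my, float dx, float dy, int radius = 3) {
    for (int y = std::max(0, my - radius); y < std::min(g.h, my + radius + 1); ++y)
        for (int x = std::max(0, mx - radius); x < std::min(g.w, mx + radius + 1); ++x) {
            g.u[g.idx(x, y)] += dx;
            g.v[g.idx(x, y)] += dy;
        }
}

// One step: damp velocities, then advect the dye backwards along them,
// which is what makes darker regions trail the point of perturbation.
void step(Grid& g, float dt, float damping = 0.99f) {
    std::vector<float> newDye(g.dye.size());
    for (int y = 0; y < g.h; ++y)
        for (int x = 0; x < g.w; ++x) {
            const int i = g.idx(x, y);
            g.u[i] *= damping;
            g.v[i] *= damping;
            // Trace backwards and sample (nearest-neighbour for brevity).
            const int px = std::clamp(int(std::lround(x - dt * g.u[i])), 0, g.w - 1);
            const int py = std::clamp(int(std::lround(y - dt * g.v[i])), 0, g.h - 1);
            newDye[i] = g.dye[g.idx(px, py)];
        }
    g.dye.swap(newDye);
}
```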
The use of motion vectors for motion compensation in video compression is ingenious - another testament to how amazing compression algorithms are. I thought it would be an interesting experiment to get into the guts of a video decoder and attempt to distort the decoded motion vectors before they are actually used to move the macro-blocks (i.e. before they affect the final output frame). My motivation - more creative in nature - was to see what kinds of images would result from different types of mathematical distortion. The process would also help me better understand the lowest levels of video coding.
In the end I discovered that the most unusual effects could be produced by reversing the motion vectors (multiplying their x and y components by -1). The stills and videos shown here were created using this technique. Another test I performed was forcing them all to zero (effectively turning off motion compensation), but the images were not as 'compelling'.
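The distortion itself is trivial once you can reach the vectors. Here is a minimal sketch of the two modes described above, applied to a macro-block's decoded motion vectors just before motion compensation. My actual hack lived inside libavcodec's decoding path; the exact hook point and the MotionVector layout here are simplified assumptions.

```cpp
// Distort an array of decoded motion vectors before motion compensation.
#include <cstddef>
#include <cstdint>

struct MotionVector { int16_t x, y; };

enum class Distortion { Reverse, Zero };

void distortMotionVectors(MotionVector* mvs, std::size_t count, Distortion mode) {
    for (std::size_t i = 0; i < count; ++i) {
        switch (mode) {
        case Distortion::Reverse:   // the effect shown in the stills and videos
            mvs[i].x = -mvs[i].x;
            mvs[i].y = -mvs[i].y;
            break;
        case Distortion::Zero:      // disables motion compensation entirely
            mvs[i].x = 0;
            mvs[i].y = 0;
            break;
        }
    }
}
```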
Here are some stills with Neo and The Oracle conversing from The Matrix: Reloaded after the video's motion vectors have been distorted by my hacked version of libavcodec:
Although I like the basic particle-line aesthetic presented in the previous pictures and videos, I felt it was time to add a bit of colour, texture and lighting to the simulation.
Here you can see me tearing my face up in a lit environment:
I re-enabled the skybox in Teh Engine and used an image of the full-saturation hue wheel to give the particles a 'random' colour:
(I think this is reminiscent of the Sony Bravia LCD TV ad!)
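For what it's worth, you can get the same kind of 'random colour from a hue wheel' result without the texture lookup by converting a random full-saturation hue straight to RGB. This little sketch is purely illustrative and not the Teh Engine code.

```cpp
// Random full-saturation, full-value hue converted to RGB for a particle.
#include <cmath>
#include <cstdlib>

struct Rgb { float r, g, b; };

Rgb hueToRgb(float hueDeg) {               // assumes S = V = 1
    const float h = std::fmod(hueDeg, 360.0f) / 60.0f;
    const float x = 1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f);
    switch (static_cast<int>(h)) {
        case 0:  return {1, x, 0};
        case 1:  return {x, 1, 0};
        case 2:  return {0, 1, x};
        case 3:  return {0, x, 1};
        case 4:  return {x, 0, 1};
        default: return {1, 0, x};
    }
}

Rgb randomParticleColour() {
    return hueToRgb(360.0f * std::rand() / float(RAND_MAX));
}
```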
Here is a newer video of tearing the cloth and enabling the tornado simulation after reducing the cloth to small fragments:
To make the simulation even more fun I made it possible to tear the cloth by shooting a red bullet at the grid. The red sphere is pulled downward under the influence of gravity and breaks any constraints within its radius.
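The tearing rule is simple: any distance constraint whose endpoints fall inside the bullet's radius is removed. Here is a sketch of that idea; the structure names are illustrative rather than the actual Teh Engine code.

```cpp
// Remove every cloth constraint with an endpoint inside the bullet's sphere.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Constraint {
    int a, b;          // indices into the particle array
    float restLength;
};

inline float dist(const Vec3& p, const Vec3& q) {
    const float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

void tearCloth(std::vector<Constraint>& constraints,
               const std::vector<Vec3>& particles,
               const Vec3& bulletCentre, float bulletRadius)
{
    constraints.erase(
        std::remove_if(constraints.begin(), constraints.end(),
            [&](const Constraint& c) {
                return dist(particles[c.a], bulletCentre) < bulletRadius ||
                       dist(particles[c.b], bulletCentre) < bulletRadius;
            }),
        constraints.end());
}
```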
The early videos follow - click to download them. The later (and much better) videos can be found throughout the following pages.
Tearing the cloth
Colouring the cloth
Running the tornado with connected particles |
Blowing in the wind |
External view of tearing the cloth |
Zoom from one view of the tornado to another
This video demonstrates the tornado in action:
I have been continually developing a Verlet-integration-based particle system inside Teh Engine and have produced a number of interesting results. The two main types of simulated phenomena are tornadoes and cloth. You can read more about these individual experiments in the next sections, as well as watch videos of the results.
An excellent resource on Verlet integration can be found at Gamasutra.
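For readers unfamiliar with the technique, here is a minimal sketch of the Jakobsen-style Verlet scheme described in that Gamasutra article: positions are integrated from the current and previous positions (no explicit velocities), then distance constraints are relaxed a few times per frame. The names are illustrative, not the Teh Engine API.

```cpp
// Position-based Verlet integration with iterative constraint relaxation.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Particle   { Vec3 pos, oldPos, accel; };
struct Constraint { int a, b; float restLength; };

// x' = x + (x - x_old) + a * dt^2
void integrate(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) {
        const Vec3 current = p.pos;
        p.pos = p.pos + (p.pos - p.oldPos) + p.accel * (dt * dt);
        p.oldPos = current;
    }
}

// Push each constrained pair back towards its rest length.
void satisfyConstraints(std::vector<Particle>& ps,
                        const std::vector<Constraint>& cs,
                        int iterations = 3)
{
    for (int it = 0; it < iterations; ++it)
        for (const auto& c : cs) {
            Vec3 delta = ps[c.b].pos - ps[c.a].pos;
            const float len = std::sqrt(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);
            if (len < 1e-6f) continue;
            const float diff = (len - c.restLength) / len;
            ps[c.a].pos = ps[c.a].pos + delta * (0.5f * diff);
            ps[c.b].pos = ps[c.b].pos - delta * (0.5f * diff);
        }
}
```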
Here are some stills:
An inside view into iCinema's Project T_Visionarium:
(We tend to drop the underscore though, so it's referred to as TVisionarium or simply TVis).
This page summarises (for the moment, below the video) the contribution I made to TVisionarium Mk II, an immersive, eye-popping, stereo 3D, interactive 360-degree experience in which a user can search through a vast database of television shows and rearrange their shots in the virtual space that surrounds them to intuitively explore their semantic similarities and differences.