Aviation Mapper is LIVE! Click here to use it.
The Aviation Mapper launch video
Running live at Sydney Airport streaming AvMap via 3G mobile Internet
Running the Aviation Mapper desktop app
Presentation at Dorkbot Sydney Finale 2011
Presentation at Ruxcon 2011: "Hacking the wireless world with Software Defined Radio"
Here are some excerpts of the presentation Matt Robert and I gave at the October 2010 meetup of Dorkbot Sydney.
If you wish to see all of the photos from the set-up phase prior to the presentation on the roof of my apartment block, please have a look at the album in my gallery.
UPDATE: NISRP has now been added to the official Winamp plugin catalogue!
More information is available on my wiki.
I gave a presentation at Dorkbot Sydney (24/02/2009) on the Eyesweb Visual Programming Language. It was an overview that demonstrated some cool things you can do using live video, iPhones (with accelerometers), mrmr, OSC, and multiple Eyesweb nodes on a network.
I wrote the following major system components:
[Initial tests indicate it can play over 500 videos simultaneously (at the lowest LOD) on one computer with 2 HT CPUs and 1GB of RAM. TVisionarium is capable of displaying a couple of hundred videos without any significant degradation in performance, but there is still so much to optimise that I would be surprised if it couldn't handle in excess of 1000.]
With my latest optimisations, TVisionarium is able to play back 1000 shots simultaneously!
Profiling shows total CPU usage averaging around 90-95% on a quad-core render node!
This indicates that the optimisations have drastically reduced lock contention and allow far more fluid rendering.
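Those particular optimisations aren't written up here, so purely as a hedged illustration of the kind of change that helps: replacing a mutex-guarded shared frame with a single-producer/single-consumer ring between the decoder and render threads means neither thread takes a lock on the hot path. A minimal sketch (not the actual TVisionarium code):

```cpp
// Hedged sketch: a lock-free single-producer/single-consumer ring of decoded
// frames. The decoder pushes, the renderer pops, and neither ever blocks the
// other with a mutex. Illustrative only, not the TVisionarium engine.
#include <array>
#include <atomic>
#include <cstddef>
#include <utility>

template <typename T, std::size_t N>
class SpscRing {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    // Called only by the decoder thread.
    bool push(T&& item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        if (head - tail_.load(std::memory_order_acquire) == N)
            return false;                          // ring is full: drop or retry
        slots_[head & (N - 1)] = std::move(item);
        head_.store(head + 1, std::memory_order_release);
        return true;
    }
    // Called only by the render thread.
    bool pop(T& out) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                          // nothing decoded yet
        out = std::move(slots_[tail & (N - 1)]);
        tail_.store(tail + 1, std::memory_order_release);
        return true;
    }
private:
    std::array<T, N> slots_;
    std::atomic<std::size_t> head_{0}, tail_{0};
};

int main() {
    SpscRing<int, 8> ring;   // int stands in for a decoded frame
    ring.push(42);
    int frame;
    return ring.pop(frame) ? 0 : 1;
}
```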
Have a look at TVis in the following video:
This is an in-development 'video tube' test of the video engine:
(Watch it on youtube.com to leave comments/rate it if you like.)
1000 videos can be seen playing back simultaneously!
This is a preview video produced by iCinema:
09/01/2007 - SBS World News:
August 2006 - Channel Nine News:
This series of pages summarises the contribution I made to TVisionarium Mk II, an immersive, eye-popping, interactive stereo-3D 360-degree experience in which a user can search through a vast database of television shows and rearrange their shots in the surrounding virtual space to intuitively explore their semantic similarities and differences.
It is a research project undertaken by the iCinema Centre for Interactive Cinema Research at the University of New South Wales (my former uni), directed by Professor Jeffrey Shaw and Dr Dennis Del Favero. More information about the project itself, Mk I, and the infrastructure used is available online.
I was contracted by iCinema to develop several core system components during an intense one-month period before the launch in September 2006. My responsibilities included writing the distributed MPEG-2 video streaming engine that enables efficient clustered playback of the shots, a distributed communications library, the spatial layout algorithm that positions the shots on the 360-degree screen, and various other video processing utilities. The most complex component was the video engine, which I engineered from scratch to meet very demanding requirements (more details are available on the next page).
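The real layout algorithm arranges shots according to their semantic relationships, but every placement ultimately boils down to an angle and a height on the cylindrical screen. As a rough sketch of that final step (the radius and coordinate convention here are illustrative only, not iCinema's):

```cpp
// Hedged sketch of the basic geometry behind placing a shot on a 360-degree
// cylindrical screen: an angle around the cylinder and a height become a 3D
// position. The radius and axes are example values, not the real setup.
#include <cmath>

struct Vec3 { float x, y, z; };

// angleDeg: position around the cylinder (0..360), height: metres above the
// screen's base, radius: cylinder radius in metres.
Vec3 placeOnCylinder(float angleDeg, float height, float radius = 5.0f)
{
    const float a = angleDeg * 3.14159265f / 180.0f;
    return { radius * std::sin(a), height, radius * std::cos(a) };
}

int main() {
    Vec3 p = placeOnCylinder(90.0f, 1.5f);   // a shot a quarter of the way round
    (void)p;
}
```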
Luckily I had the pleasure of working alongside some wonderfully talented people: in particular Matt McGinity (3D graphics/VR guru), as well as Jared Berghold, Ardrian Hardjono and Tim Kreger.
I fixed the lighting calculations and thought I would use a built-in texture:
Here is a fly-through of the standard tornado simulation with some pretty filaments:
Shortly after the presentation day, I ripped out the original physics code that someone (who shall not be mentioned!) had written in the minutes prior to the presentation and replaced it with more 'physically correct' code:
A little something I made in my spare time:
(More details coming later...)
Thanks to the generosity of Aras Vaichas, I came into possession of an old (1992) 60x8 dual-colour LED display. As it was just the display itself (no manual, instructions, software, etc.), I set about reverse engineering the board. Using my multimeter I re-created the schematic for the board and found all the relevant datasheets online. Having figured out how to talk to the display, I interfaced it via the parallel port and wrote some control software for it. Once I could display various test patterns (multi-coloured sine waves), I 'net-enabled' the software so that the display could be controlled over a network via UDP packets - the resolution is so low that the entire LED configuration fits into a single packet! Finally, I wrote a plugin for Winamp that streams the frequency analysis of the playing song to the display, which produces results like this:
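Since the panel is only 60x8 with two LEDs per pixel, a full frame is 480 x 2 bits = 120 bytes, which fits comfortably in a single UDP datagram. As a hedged sketch of that packing (the actual packet layout, address and port used by my control software were board-specific and are not shown here):

```cpp
// Hypothetical sketch: packing a 60x8 two-bit-per-LED frame into one UDP
// datagram. Two bit-planes (red and green) of 60 bytes each = 120 bytes total.
#include <array>
#include <cstdint>
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

constexpr int WIDTH = 60, HEIGHT = 8;

struct Frame {
    std::array<uint8_t, WIDTH * HEIGHT / 8> red{}, green{};   // one bit per LED per plane

    void set(int x, int y, bool r, bool g) {
        const int bit = y * WIDTH + x;
        if (r) red[bit / 8]   |= 1u << (bit % 8);
        if (g) green[bit / 8] |= 1u << (bit % 8);
    }
};

int main() {
    Frame f;
    for (int x = 0; x < WIDTH; ++x)                    // simple test pattern:
        f.set(x, x % HEIGHT, true, (x % 2) != 0);      // red diagonal, every other pixel orange

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                        // port chosen arbitrarily for this sketch
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr); // example address of the display host

    uint8_t packet[sizeof f.red + sizeof f.green];     // the whole display state in 120 bytes
    std::memcpy(packet, f.red.data(), sizeof f.red);
    std::memcpy(packet + sizeof f.red, f.green.data(), sizeof f.green);
    sendto(sock, packet, sizeof packet, 0,
           reinterpret_cast<sockaddr*>(&dst), sizeof dst);
    close(sock);
}
```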
Using my modified version of ffdshow, which sends a video's motion vectors via UDP to an external application, I visualised the motion vectors from The Matrix: Reloaded inside my fluid simulation. The grid resolution is set from the macro-block resolution of the video sequence, and each 16x16-type motion vector controls one spatially matching point on the velocity grid. The following visualisation is taken from the scene where they are discussing the threat to Zion while inside the Matrix, before Neo senses that agents are coming (followed by Smith) and tells the ships' crews to retreat.
The final video is on YouTube, with the MVs overlaid on top of the source video.
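The mapping itself is straightforward: one velocity-grid cell per macro-block, with each motion vector nudging its matching cell. The following sketch illustrates the idea; the MotionVector struct and the gain value are simplified stand-ins for what the ffdshow patch actually sends:

```cpp
// Sketch of driving a fluid velocity grid from per-macro-block motion vectors.
#include <cstdint>
#include <vector>

struct MotionVector { int16_t dx, dy; };   // in pixels here; real MPEG MVs are in half-pel units

struct VelocityGrid {
    int w, h;                    // one cell per macro-block
    std::vector<float> u, v;     // horizontal / vertical velocity components
    VelocityGrid(int w_, int h_) : w(w_), h(h_), u(w_ * h_, 0.f), v(w_ * h_, 0.f) {}
};

// frameW/frameH are the video dimensions; mvs holds one vector per macro-block,
// row-major, (frameW/16) x (frameH/16) of them.
void injectMotionVectors(const std::vector<MotionVector>& mvs,
                         int frameW, int frameH,
                         VelocityGrid& grid, float gain = 0.1f)
{
    const int mbW = frameW / 16, mbH = frameH / 16;
    for (int my = 0; my < mbH && my < grid.h; ++my) {
        for (int mx = 0; mx < mbW && mx < grid.w; ++mx) {
            const MotionVector& mv = mvs[my * mbW + mx];
            const int cell = my * grid.w + mx;
            grid.u[cell] += gain * mv.dx;   // nudge the fluid rather than overwrite it
            grid.v[cell] += gain * mv.dy;
        }
    }
}

int main() {
    VelocityGrid grid(720 / 16, 576 / 16);                        // PAL-sized frame
    std::vector<MotionVector> mvs(grid.w * grid.h, MotionVector{4, -2});
    injectMotionVectors(mvs, 720, 576, grid);
}
```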
This is the new-and-improved fluid simulation in action. I'm perturbing the 'blue milk' with my mouse. Watch the darker region form and expand behind the point of perturbation. Due to the finer resolution of the velocity grid, the linear artifacts apparent in the earlier version have disappeared and it now looks smooth in all directions.
Velocity-grid-based 2D fluid simulation with effects that, interestingly enough, resemble Navier-Stokes simulations (well, a little anyway).
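For the curious, the core of a velocity-grid solver of this kind is the semi-Lagrangian advection step: each cell traces the flow backwards in time and samples the old field there. A minimal sketch (illustrative only, not my engine's code):

```cpp
// Minimal semi-Lagrangian advection over a regular grid, in the style of
// Stam's stable fluids. Not the original simulation code.
#include <algorithm>
#include <vector>

// A scalar field (or one component of the velocity field) on a regular grid.
struct Field {
    int w, h;
    std::vector<float> v;
    Field(int w_, int h_) : w(w_), h(h_), v(w_ * h_, 0.f) {}
    float& at(int x, int y) { return v[y * w + x]; }

    // Bilinear sampling, clamped to the grid edges.
    float sample(float x, float y) const {
        x = std::clamp(x, 0.f, w - 1.001f);
        y = std::clamp(y, 0.f, h - 1.001f);
        const int x0 = int(x), y0 = int(y);
        const float fx = x - x0, fy = y - y0;
        auto f = [&](int xi, int yi) { return v[yi * w + xi]; };
        return (1 - fx) * ((1 - fy) * f(x0, y0)     + fy * f(x0, y0 + 1)) +
               fx       * ((1 - fy) * f(x0 + 1, y0) + fy * f(x0 + 1, y0 + 1));
    }
};

// Advect quantity q (which can itself be u or v) through the velocity field (u, v):
// each cell traces its velocity backwards by dt and samples the old field there.
void advect(const Field& q, const Field& u, const Field& v, Field& out, float dt)
{
    for (int y = 0; y < q.h; ++y)
        for (int x = 0; x < q.w; ++x) {
            const float px = x - dt * u.v[y * q.w + x];
            const float py = y - dt * v.v[y * q.w + x];
            out.at(x, y) = q.sample(px, py);
        }
}

int main() {
    Field u(64, 64), v(64, 64), dye(64, 64), out(64, 64);
    dye.at(32, 32) = 1.f;              // a blob of 'blue milk'
    u.v.assign(u.v.size(), 1.f);       // uniform rightward flow
    advect(dye, u, v, out, 0.5f);
}
```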
To more carefully study the effects of reversing motion vector directions, I created a 'control' video of me making particular motions at different speeds. You can witness the results:
The following snippet of the Burley Brawl from The Matrix: Reloaded has been passed through my hacked version of libavcodec to reverse the direction of each motion vector:
The use of motion vectors for motion compensation in video compression is ingenious - another testament to how amazing compression algorithms are. I thought it would be an interesting experiment to get into the guts of a video decoder and attempt to distort the decoded motion vectors before they are actually used to move the macro-blocks (i.e. before they affect the final output frame). My motivation - more creative in nature - was to see what kinds of images would result from different types of mathematical distortion. The process would also help me better understand the lowest levels of video coding.
In the end I discovered the most unusual effects could be produced by reversing the motion vectors (multiplying their x & y components by -1). The stills and videos shown here were created using this technique. Another test I performed was forcing them all to zero (effectively turning off the motion compensation) but the images were not as 'compelling'.
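Stripped of all the decoder plumbing, the hack amounts to a one-line transform applied to each motion vector just before motion compensation. The sketch below shows only that essence; the hook point and types in the real libavcodec patch are different:

```cpp
// The essence of the hack: flip (or zero) each motion vector immediately
// before the decoder uses it for motion compensation. Stand-alone sketch,
// not the actual libavcodec patch.
#include <cstdint>

enum class MvDistortion { Reverse, Zero };

// A decoded motion vector in the decoder's internal units (e.g. half-pel).
struct MV { int16_t x, y; };

// Applied to every macro-block's vector before it moves any pixels.
inline MV distort(MV mv, MvDistortion mode)
{
    switch (mode) {
        case MvDistortion::Reverse: return { int16_t(-mv.x), int16_t(-mv.y) };
        case MvDistortion::Zero:    return { 0, 0 };   // effectively disables motion compensation
    }
    return mv;
}

int main() {
    const MV mv{3, -7};
    const MV r = distort(mv, MvDistortion::Reverse);   // becomes {-3, 7}
    return r.x + mv.x;                                  // components cancel exactly: returns 0
}
```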
Here are some stills with Neo and The Oracle conversing from The Matrix: Reloaded after the video's motion vectors have been distorted by my hacked version of libavcodec:
This is the presentation we gave to demonstrate Teh Engine during the final Computer Graphics lecture in front of a full lecture hall of ~300 students.
(Thanks Ashley "Mac-man" Butterworth for operating the camera.) Please excuse my 'ah's and 'um's - it happens when I'm really tired; I had been up for >48 hours.
It should be noted that I fixed the physics so the car behaves properly. Please read the previous page for more detail about the engine.
Inspired by the many animations used in documentaries dealing with our Solar System, the Universe and Man's exploration of it, I created an animation of one of the Voyager space probes leaving Earth.
During lunchtimes at high school I created many time-lapse sequences of the harbour and the evolution of cloud formations. I filmed on a video camera and then sampled one frame every X seconds to achieve the speed-up.
The frames of the video above were actually captured on my digital still camera (for extra quality) using the accompanying remote capture software.
I own a Rocky Mountain Hammer mountain bike - it is certainly the best way to get around the city. I have had it stolen once (I was bike-jacked, which involved being pushed off while riding and then punched in the face), but I was very lucky as the police found it some months later. Thus we have been reunited!
Inspired by a music video (which I can no longer recall), I attempted to replicate a special effect that involves sliding the background more slowly than the keyed moving foreground to emphasise the motion. The two separate layers are composited, as opposed to just doing a pan across the original footage. Considering I'm not using calibrated video equipment and the lighting conditions changed throughout the takes, I think the result is okay.
I have published many videos on YouTube. The most popular one (in fact the first one I ever posted) was only viewed ~1,750 times. I don't have delusions of grandeur, but I was wondering what sort of video could be more popular? Obviously if it were original content then that would mean personal success. However, for the sake of this experiment, what subject would quickly attract viewers? One evening I was watching David Letterman and the answer struck me. Two words: Paris Hilton. (Forgive me: I never thought her name would be perpetuated in my webspace. I have my own...ahem...negative opinion on her rise to 'fame', what she symbolises in this age, etc, etc - but that's not the point of this experiment.)
I posted the following three videos (rather funny excerpts from Letterman). The first video's view count quickly shot into the mid-thousands and has now come to relative rest at 20,000! That's popular culture (and many other things) for you.
While in Brazil, I used a digital still camera to create some stop-motion animations:
This is the finale of the year 2000 Prize Giving Video at my old high school. I attempted to create from scratch one of those Earth zoom-in/out sequences that have been shown in many films, TV series and computer games. It features at the beginning of the video.
Some weekends ago I had access to a blue screen and performed a quick chroma key experiment.
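For the curious, the simplest form of chroma keying derives an alpha value from how much more blue a pixel is than its other channels, and compositing the keyed foreground with a reduced alpha over a clean background plate is what gives the see-through effect below. A hedged sketch (real keyers do far more, e.g. spill suppression and edge softness; the gain value here is arbitrary):

```cpp
// Minimal blue-screen key: alpha from 'blue excess', then a standard 'over'
// composite. Illustrative only, not the tools actually used.
#include <algorithm>
#include <cstdint>

struct RGBA { uint8_t r, g, b, a; };

// 0 = fully keyed out (pure screen blue), 255 = fully opaque foreground.
uint8_t blueScreenAlpha(RGBA p)
{
    const int blueExcess = int(p.b) - std::max(int(p.r), int(p.g));
    return uint8_t(255 - std::clamp(blueExcess * 4, 0, 255));   // 4 = arbitrary key gain
}

// Standard 'over' composite of the foreground onto the background with a given alpha.
RGBA over(RGBA fg, RGBA bg, uint8_t alpha)
{
    auto mix = [&](uint8_t f, uint8_t b) {
        return uint8_t((f * alpha + b * (255 - alpha)) / 255);
    };
    return { mix(fg.r, bg.r), mix(fg.g, bg.g), mix(fg.b, bg.b), 255 };
}

int main() {
    const RGBA screen{30, 40, 220, 255}, skin{200, 150, 120, 255}, plate{90, 90, 90, 255};
    over(skin, plate, blueScreenAlpha(skin));       // mostly opaque: the person stays visible
    over(screen, plate, blueScreenAlpha(screen));   // keyed out: the background plate shows through
}
```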
Here is a neat trick you can do to make yourself see-through:
I also saw this as an opportunity to re-use an animation I created back in 2000 (it's not easy balancing your entire body on a small blue box!):
Although I like the basic particle-line aesthetic presented in the previous pictures and videos, I felt it was time to add a bit of colour, texture and lighting to the simulation.
Here you can see me tearing my face up in a lit environment:
I re-enabled the skybox in Teh Engine and used an image of the full-saturation hue wheel to give the particles a 'random' colour:
(I think this is reminiscent of the Sony Bravia LCD TV ad!)
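In the engine the colour comes from sampling the hue-wheel texture at a random position; the same 'random full-saturation colour per particle' effect can be sketched procedurally by picking a random hue and converting HSV (with S = V = 1) to RGB:

```cpp
// Sketch: a random colour from the rim of the hue wheel (full saturation and value).
#include <cmath>
#include <cstdlib>

struct Colour { float r, g, b; };

Colour randomHue()
{
    const float h = 6.0f * (std::rand() / float(RAND_MAX));        // hue sector in [0, 6)
    const float x = 1.0f - std::fabs(std::fmod(h, 2.0f) - 1.0f);   // ramp within the sector
    switch (int(h)) {
        case 0:  return { 1, x, 0 };
        case 1:  return { x, 1, 0 };
        case 2:  return { 0, 1, x };
        case 3:  return { 0, x, 1 };
        case 4:  return { x, 0, 1 };
        default: return { 1, 0, x };
    }
}

int main() {
    const Colour c = randomHue();
    return (c.r + c.g + c.b) > 0.0f ? 0 : 1;
}
```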
Here is a newer video of tearing the cloth and enabling the tornado simulation after reducing the cloth to small fragments: