
My Contribution

I wrote the following major system components:

    1) A multi-threaded, load-balancing MPEG-2 video decoder engine (a rough sketch of its worker model follows this list), featuring:
    • Automatic memory management & caching
    • Level-Of-Detail support and seamless transitions
    • Continuous playback or shot looping (given cut information)
    • Asynchronous loading and destruction
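    For the curious, here is a minimal C++ sketch of how such a worker model can be structured: the render thread posts asynchronous decode requests, and a pool of decode threads pulls them from a shared queue and fills per-shot frame caches. All names here (DecodeRequest, DecodePool, decodeNextFrame) are illustrative assumptions, not the actual TVis source.

    // Hypothetical sketch only; not the TVisionarium engine itself.
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct DecodeRequest {
        int shotId;          // which shot to decode
        int levelOfDetail;   // 0 = lowest resolution
        bool loop;           // loop the shot if cut information is available
    };

    class DecodePool {
    public:
        explicit DecodePool(unsigned threadCount) {
            for (unsigned i = 0; i < threadCount; ++i)
                workers_.emplace_back([this] { run(); });
        }

        ~DecodePool() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                stop_ = true;
            }
            cv_.notify_all();
            for (auto& t : workers_) t.join();
        }

        // Called from the render thread; returns immediately (asynchronous load).
        void submit(DecodeRequest request) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(request);
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                DecodeRequest request;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
                    if (stop_ && queue_.empty()) return;
                    request = queue_.front();
                    queue_.pop();
                }
                decodeNextFrame(request);   // placeholder for the MPEG-2 decode + cache step
            }
        }

        void decodeNextFrame(const DecodeRequest&) { /* decode and cache a frame */ }

        std::queue<DecodeRequest> queue_;
        std::mutex mutex_;
        std::condition_variable cv_;
        std::vector<std::thread> workers_;
        bool stop_ = false;
    };

    Because submit() only enqueues a request, loading and destruction never stall the render loop, which is the point of the asynchronous design.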

    [Initial tests indicate it can play over 500 videos simultaneously on one computer (with two hyper-threaded CPUs and 1 GB of RAM, at the lowest LOD). TVisionarium is capable of displaying a couple of hundred videos without any significant degradation in performance, and there is still so much left to optimise that I would be surprised if it couldn't handle in excess of 1000.]

    With my latest optimisations, TVisionarium can now play back 1000 shots simultaneously!
    Profiling shows total CPU usage averaging around 90-95% on a quad-core render node, which indicates that those optimisations have drastically reduced lock contention and allow far more fluid rendering.
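    To give a flavour of the kind of change that cuts lock contention (a generic pattern, not necessarily the specific optimisation used in TVis): replace one engine-wide lock with a per-shot mutex, and have the render loop use try_lock so it never stalls waiting for a decoder.

    // Illustrative pattern only; all names are assumptions.
    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct Shot {
        std::mutex mutex;                         // per-shot lock: shots never contend with each other
        std::vector<std::uint8_t> pendingFrame;   // most recently decoded frame
        bool hasNewFrame = false;
    };

    // Decode thread: store the newly decoded frame for this shot.
    void storeFrame(Shot& shot, std::vector<std::uint8_t> frame) {
        std::lock_guard<std::mutex> lock(shot.mutex);
        shot.pendingFrame = std::move(frame);
        shot.hasNewFrame = true;
    }

    // Render thread: upload the new frame only if it can do so without blocking;
    // otherwise keep drawing the texture that is already on the GPU.
    void maybeUpload(Shot& shot) {
        std::unique_lock<std::mutex> lock(shot.mutex, std::try_to_lock);
        if (lock.owns_lock() && shot.hasNewFrame) {
            // uploadTexture(shot.pendingFrame);   // hypothetical GL upload call
            shot.hasNewFrame = false;
        }
    }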
    Have a look at TVis in the following video:

    This is an in-development 'video tube' test of the video engine:
    (Watch it on youtube.com to leave comments/rate it if you like.)

Video Behind the Scenes

1000 videos can be seen playing back simultaneously!

This is a preview video produced by iCinema:

Events

T_Visionarium was officially launched on 08/01/2008 as part of the 2008 Sydney Festival. Please read my blog post about it. Here are some pictures:

The festival banner:

Crowd before the speeches:

The digital maestros (Matt McGinity & I):

T_Visionarium (AKA Project TVisionarium Mk II)

This series of pages summarises the contribution I made to TVisionarium Mk II, an immersive, eye-popping, stereo-3D, interactive 360-degree experience in which a user can search through a vast database of television shows and rearrange their shots in the surrounding virtual space to intuitively explore their semantic similarities and differences.

It is a research project undertaken by iCinema, the iCinema Centre for Interactive Cinema Research at the University of New South Wales (my former uni), directed by Professor Jeffrey Shaw and Dr Dennis Del Favero. More information about the project itself, about Mk I, and about the infrastructure used is available online.

I was contracted by iCinema to develop several core system components during an intense one-month period before the launch in September 2006. My responsibilities included writing the distributed MPEG-2 video streaming engine that enables efficient clustered playback of the shots, a distributed communications library, the spatial layout algorithm that positions the shots on the 360-degree screen, and various other video processing utilities. The most complex component was the video engine, which I engineered from scratch to meet very demanding requirements (more details are available on the next page).
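As a minimal sketch of the simplest part of such a layout, the snippet below maps a grid of shots onto a cylindrical 360-degree screen: shots are grouped into horizontal bands, each band wraps the full circle, and every cell is converted to a 3D position on a cylinder of the given radius. The real TVis layout is driven by the semantic relationships between shots; this only illustrates the screen-to-cylinder mapping, and all names and parameters are assumptions.

// Illustrative geometry helper; not the actual TVis layout algorithm.
#include <cmath>
#include <vector>

struct Placement {
    float x, y, z;   // position on the cylinder
    float yawDeg;    // rotation so the shot faces the viewer at the centre
                     // (exact convention depends on the renderer)
};

std::vector<Placement> layoutOnCylinder(int shotCount, int rows,
                                        float radius, float bandHeight) {
    std::vector<Placement> out;
    out.reserve(shotCount);
    const int perRow = (shotCount + rows - 1) / rows;   // shots per band (ceiling)
    const float pi = 3.14159265358979f;

    for (int i = 0; i < shotCount; ++i) {
        const int row = i / perRow;
        const int col = i % perRow;
        const float angle = 2.0f * pi * col / perRow;             // position around the circle
        const float y = (row - (rows - 1) * 0.5f) * bandHeight;   // band, centred vertically

        Placement p;
        p.x = radius * std::cos(angle);
        p.z = radius * std::sin(angle);
        p.y = y;
        p.yawDeg = 90.0f - angle * 180.0f / pi;   // turn the quad to face inward
        out.push_back(p);
    }
    return out;
}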

Luckily I had the pleasure of working alongside some wonderfully talented people: in particular Matt McGinity (3D graphics/VR guru), as well as Jared Berghold, Ardrian Hardjono and Tim Kreger.

GPS-controlled Autonomous Earth Driver

My major project for PHYS2601 'Computer Applications 2' in 2003. You can read the report as a PDF or as Word-exported HTML (broken), which details the design, electronics, firmware and testing.

Genetic Programming: 3D Visualisation in Python

This is a GUI frontend to the genetic programming assignment given in this subject; the aim is to evolve a wall-following robot. The program provides multiple visualisations of the process. It was written with Janice Leung - many thanks for the beautiful widgets! Developed on (but not only for) Linux using Python and its bindings & add-ons: PyQt, PyOpenGL, PIL and psyco. A README is available with more information about the code used to render the robot & world.

AusAir

Click one of the following images to see the larger version:

Real-time detection of commercials on television

For my undergraduate honours thesis I conducted research into the unbuffered real-time detection of commercials on television with a view to muting the volume when ads are being broadcast. The research itself dealt with examining the features required to enable robust real-time detection. I developed a sophisticated video analysis and processing framework to underpin experimentation and compilation of results.
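As an illustration of the kind of low-level feature such a framework computes (not the detection method the thesis actually settled on): black frames, usually paired with silence, are a classic cue for commercial-break boundaries on broadcast TV, and a frame can be flagged as black by thresholding its mean luma.

// Illustrative feature extractor; threshold and frame layout are assumptions.
#include <cstddef>
#include <cstdint>

// luma: pointer to 8-bit Y-plane samples, width * height of them.
bool isBlackFrame(const std::uint8_t* luma, std::size_t width, std::size_t height,
                  double threshold = 20.0) {
    const std::size_t n = width * height;
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += luma[i];
    const double meanLuma = n ? static_cast<double>(sum) / n : 0.0;
    return meanLuma < threshold;   // very dark frame: candidate break boundary
}

A robust detector would combine several such cues over time (audio level, shot-change rate, and so on) rather than rely on any single frame.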

The following screenshots show the system running live (click on one to see the full-res image):

TVisionarium Mk II (AKA Project T_Visionarium)

An inside view into iCinema's Project T_Visionarium:
(We tend to drop the underscore though, so it's referred to as TVisionarium or simply TVis).

 * NEW: EXHIBITION PIX *

This page summarises (for the moment, below the video) the contribution I made to TVisionarium Mk II, an immersive, eye-popping, stereo-3D, interactive 360-degree experience in which a user can search through a vast database of television shows and rearrange their shots in the surrounding virtual space to intuitively explore their semantic similarities and differences.
