In a distant future, humanity has retreated underground to escape increasingly inhospitable surface conditions. Here, in subterranean grottos, the Storytellers safeguard fragments of the past. But they don't merely preserve these artefacts—they breathe new life into them through a process called Imaginative Restoration.
Imaginative Restoration: Rewilding Division is an immersive installation that invites participants to step into the role of a Storyteller. Your mission? To interact with and creatively restore damaged archival films from the National Film and Sound Archive of Australia (NFSA). As a Storyteller in the Rewilding Division, you dream up and repopulate the scenes with Australian flora and fauna: hand-draw the creatures you imagine and watch them enter the film footage in real time, adding colour to the black-and-white scenes of the past.
Storytellers is the result of an exploratory collaboration between the National Institute of Dramatic Art (NIDA), the National Film and Sound Archive of Australia (NFSA) and the School of Cybernetics at the Australian National University (ANU). It emerged from a workshop held in Canberra in July 2024, where experts in dramatic writing, props and effects, curation, and digital technologies came together to explore the future of dramatic arts creation, recording, and archiving in the age of generative AI.
For setup instructions, see SETUP.md.
This repository contains the software for the project. The code is by @benswift, but others contributed significant work to the overall project: writing, set design and build, archival content and more. See the credits below.
It's a web app, powered by Ash and Phoenix, written in Elixir and hosted on fly.io.
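If you haven't used Ash before, resources declare the app's data model and actions. Here's a minimal, hypothetical sketch of the general shape (assuming Ash 3 with AshPostgres; the module, table and attribute names are illustrative, not taken from this app):

```elixir
defmodule Rewilding.Sketches.Sketch do
  # Hypothetical resource, for illustration only.
  use Ash.Resource,
    domain: Rewilding.Sketches,
    data_layer: AshPostgres.DataLayer

  postgres do
    table "sketches"
    repo Rewilding.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :image_url, :string, allow_nil?: false
    create_timestamp :inserted_at
  end

  actions do
    defaults [:read]

    create :create do
      accept [:image_url]
    end
  end
end
```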
Note: there was a previous version of the project using a wholly different tech stack, running CUDA-accelerated models locally on an NVIDIA Jetson Orin AGX. That code is still in the repo, but it lives in the `jetson` branch. It's not related (in the strict git-history sense) to the current branch, so if you want to merge between them you'll have a bad time. But there's some interesting stuff in that codebase as well, and archives are about what actually happened, not just the final (retconned) story about how we got here.
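If you want to explore that branch, fetch and switch rather than merge; a quick sketch with standard git commands (assuming the remote is named `origin`):

```sh
# the jetson branch shares no ancestry with this one, so browse it in place
git fetch origin
git switch jetson
```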
This creative installation was made possible by a collaboration of the ANU School of Cybernetics, the National Film and Sound Archive and the NIDA Future Centre. Brought to life by:
- Charlotte Bradley
- Joe Hepworth
- Daniel Herten
- Ripley Rubens
- Beth Shulman
- Ben Swift
- Lily Thomson
- Marcelo Zavala-Baeza
- Video loop: *Annette Kellerman Performing Water Ballet at Silver Springs, Florida* (1939), courtesy of the National Film and Sound Archive
- Background music: "Horizon of the Unknown" by SoundFlakes, CC BY 4.0
- Text-to-image model: T2I-Adapter (Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan and Xiaohu Qie, *T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models*), hosted on Replicate
- Object detection model: Florence-2 (Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu and Lu Yuan, *Florence-2: Advancing a unified representation for a variety of vision tasks*), hosted on Replicate
- Background removal model: Carve's Tracer B7, fine-tuned on the CarveSet dataset, hosted on Replicate
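The three models above are hosted on Replicate and invoked over its HTTP API. As a hedged illustration (not the app's actual client code), here's roughly what such a call looks like from Elixir using Req; the module name, version hash and input fields are placeholders:

```elixir
defmodule Rewilding.ReplicateClient do
  # Hypothetical client module: the app's real code may differ.
  @api_url "https://api.replicate.com/v1/predictions"

  # Creates a prediction on Replicate. The response includes a status and a
  # `urls.get` endpoint that can be polled until the output is ready.
  def predict(model_version, input) do
    Req.post!(@api_url,
      auth: {:bearer, System.fetch_env!("REPLICATE_API_TOKEN")},
      json: %{version: model_version, input: input}
    ).body
  end
end
```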
Except where otherwise indicated, all code in this repo is licensed under the MIT licence.