Unsecurities Lab #2: Landscape Defence



The central provocation for this cycle was LUMI, an 18-minute film by artist-film-maker Abelardo Gil-Fournier and media theorist Jussi Parikka that imagines an artificial intelligence trained on archival snow and ice imagery and tasked with “repairing” a damaged cryosphere. Projected across the suite’s floor-to-ceiling 180° screens, LUMI was deployed as a ‘research environment’. Facilitator Nathan Jones, introducing the day alongside Gil-Fournier, proposed that “art is uniquely capable of unsettling assumptions and, just as crucially, of re-calibrating the turbulence it provokes.” In this case, the work enabled reflection not just on what artificial intelligence might do, but on how agency behaves when unpredictable activity in natural and computational processes intersects: an increasingly urgent question for environmental and cyber-physical security systems.

Twenty-four invited specialists took part, drawing together disciplines rarely brought into direct dialogue. Participants included: Carolyn Pedwell (AI, affect theory, feminist philosophy), Rob Lamb (environmental risk and civil engineering), Mark Wright (experimental technology and immersive media), Delphine Grass (translation studies, biosemiotics), Michael Aspinall (space weather monitoring), Saskia Vermeylen (critical legal theory, postcolonial law), Lena Podetz (machine learning and data security), Rolien Hoyng (urban digital infrastructure), Jonathan Gray (data politics, digital methods), Ben Grubert (AI start-up leadership), David Parkes (glaciology), Kwasu Tembo (speculative fiction, Afrofuturism), Charlie Gere (digital aesthetics and philosophy), Suzi Illic (coastal dynamics, sediment modelling), Diego Moral Pombo (ice sheet geodesy), Basil Germond (maritime security and geopolitics), Bill Oxbury (defence strategy, mathematics), Carl Green (engineering), and Alex Bush (eco-acoustics, AI), among others.

Unlike conventional interdisciplinary workshops, the Lab avoids simple “exchange of expertise.” Instead, it entangles fields at the table level. Participants are invited to speak through the provocations raised by the artwork, generating insights that are transcribed in full and later parsed by large-language-model tools—so that the conversation itself becomes a dataset for further reflection and analysis.
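This transcript-to-dataset step can be pictured, in rough terms, as a small parsing and prompting pipeline. The sketch below is purely illustrative: the sample transcript, the speaker-turn format, and the query_llm stub are assumptions made for the example, not a description of the Lab’s actual tooling.

```python
import re

# A tiny stand-in transcript; the real Lab transcript is much longer and its
# speaker-turn format is an assumption made for this sketch.
SAMPLE_TRANSCRIPT = """\
Facilitator: What does it mean for an AI to 'repair' a cryosphere it has only seen in archives?
Participant A: The training data is already a memory of ice, not ice itself.
Participant B: So the repair target drifts with the archive.
"""

def load_turns(transcript: str):
    """Split a plain-text transcript into (speaker, utterance) pairs."""
    turns = []
    for line in transcript.splitlines():
        match = re.match(r"^([^:]+):\s+(.*)$", line)
        if match:
            turns.append((match.group(1).strip(), match.group(2).strip()))
    return turns

def build_theme_prompt(turns, max_turns=40):
    """Assemble a prompt asking a language model to code the excerpt thematically."""
    excerpt = "\n".join(f"{speaker}: {text}" for speaker, text in turns[:max_turns])
    return (
        "You are assisting with qualitative analysis of a workshop transcript.\n"
        "Identify recurring themes, points of disagreement, and open questions.\n\n"
        f"Transcript excerpt:\n{excerpt}"
    )

def query_llm(prompt: str) -> str:
    """Placeholder for whatever large-language-model service is actually used.

    The name and signature are illustrative only; swap in a real client call here.
    """
    raise NotImplementedError("connect this stub to a language-model API")

if __name__ == "__main__":
    turns = load_turns(SAMPLE_TRANSCRIPT)
    print(build_theme_prompt(turns))
```

In practice the stub would be replaced with a call to whichever language-model service the team uses, and the model’s thematic summaries would sit alongside the raw transcript as material for later reflection and analysis.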

The morning session was structured around a “threat-profiling sprint,” in which participants collaboratively analysed LUMI’s speculative scenario, exploring in particular the values, motivations, means, and potentialities of its non-human antagonists. Together, participants traced how distorted training data, over-literal AI instruction, or the seductive “socialisation of nostalgia” around environmental restoration might contribute to cascading environmental and geopolitical risks.

In the afternoon, the group shifted from critique to speculative repair. Using film stills, field notes, and quotations from deep tech and philosophy, participants produced scenario sketches in which non-human actors gained memory, motive, and standing within possible governance models. Tables worked across disciplinary boundaries, often deliberately misaligning technical logics with affective, cultural or ecological ones to produce unexpected insights. The intention was to ask: What happens when every environment is directly coupled to a technological system, or even carries security responsibility for itself? In doing so, the Lab created space to speculate and experiment conceptually with the direction of emergent technologies before they become formalised.

Participants described the experience as “melancholic” and “visually beautiful, but ethically ambiguous.” One observed, “How to get stability out of a set of chaotic data inputs is a familiar challenge in predictive modelling.” The ambiguity, non-linearity and sensory atmosphere of LUMI helped participants stay with the complexity of unclear agency. Rather than simplifying risk, the Lab enabled sustained reflection on how AI, climate systems, and decision-making interlock, particularly in contexts where no single actor is in control. Unsecurities Lab #2 surfaced deep concerns around mission drift in AI and environmental modelling. Participants questioned localised material approaches to global issues, raising broader questions about purpose, agency, and accountability in emerging technologies that see (and think) at planetary scales.

The Lab offered a rare space for collaborative inquiry into how environmental, technological, and epistemological systems are becoming entangled in ways that generate new and sometimes unstable forms of knowledge. In doing so, it brought to the surface key tensions between automation and judgment, aesthetics and oversight, repair and governance.

In the coming months, the full transcript will be analysed to extract insights for future policy and research interventions. This material will contribute to an evolving Planetary Threat-and-Repair Archive and will inform the next iteration of the programme, Unsecurities Lab #3, which will explore natural ecologies as critical national infrastructures.
