Interacting with machines without losing what makes us human

Right now, I'm using AI to explore new ways of thinking about human-computer interaction, replacing concepts of control and manipulation with resonance, sensing and belonging.

My research explores how interaction with technology shapes how we see ourselves.

We are at risk of turning into the machines that we create.
Iain McGilchrist

Any interface defines potential: what actions the person can take, and how the system can respond. The actions the interface cares about are a tiny fraction of what that person can do elsewhere. In this way, the system defines a model of what a human is: a vastly simplified model. For example, my laptop is designed for a human with fingers, eyes and ears but is indifferent to my nose, the expression on my face, the scars on my skin, my need for warmth. The question is: when I use my laptop, do I start to conceive of myself as this model?

I create art that explores the impact of interactive system design and speculates on how it could be different. If familiar technology limits which parts of being human are considered relevant, then how can we invent new forms of interaction that unleash everything else? If consumer technology embodies values of passive consumption and predetermined expression, then what would systems look like that embrace activism and open-endedness? (I write about this in depth in a recent paper.)

I approach interaction from an embodied perspective, by which I mean the non-verbal parts of our intelligence that do not rely on the manipulation of symbolic abstractions: things like dance, music, social dynamics, beauty, riding a bike - that which is difficult to put into words or to model as a formal process. I often collaborate with dancers to create systems that respond to the moving body. Indeed, much of this research has developed through collaborations with other artists on their own journeys.

I'm interested in how far such a system can encapsulate a philosophical concept and bring it to life in the minds of its participants. Do ideas infused into the nature of a system become internalised by those who use it?

For example, Cave of Sounds is an ensemble of new instruments that invites anybody to come and play music with strangers. It embodies the idea that the value in a musical encounter emerges through participation. There is no scoring for playing to a prescribed standard. There are real people around whose reactions one can gauge. Compare this to a social network, where the myriad of human responses is often reduced to a single bit of information – the "like" button – and the quantified system of status that ensues.

The disproportionate amount of power wielded by the dominant tech companies is a familiar subject. But the abstractions these systems define – users, posts, content – are often so transparent and ubiquitous that we don't even realise they are there, mediating how we think about the system and our role within it. To me, the value of interactive art is in showing how interactivity exists today and probing how it could be different.

The ultimate hidden truth of the world is that it is something we make and could just as easily make differently.
David Graeber

I began this journey with my PhD exploring interactive music systems. I was looking for the interactive equivalent of a musical experience but kept hitting upon a paradox familiar to those designing for creative interaction. For the participant to be creative with a system, they need agency over the outcome of the interaction. But any agency the designer grants to the participant is agency the designer loses over the experience and its aesthetic.

What I found interesting is that sometimes people blame the system and other times they blame themselves. Who they blame depends on how the system is presented and framed, and the role they assume. This is significant for creative expression, but also for the everyday technology that mediates our lives.

The interplay between roles, agency, empowerment and enjoyment is central to how I think about our interactions with technology. For example, in Cave of Sounds, visitors are invited to participate as musicians, and the piece is built to support that intimidating prospect while avoiding any fakery. In Post-Truth and Beauty, on the other hand, visitors are invited to participate as observers. They must interact by moving to explore an unfamiliar world of sounds suspended around them, but the piece is framed to avoid suggesting they are in a creative role. Movement Alphabet, meanwhile, introduced a human performer as the mediator between participant and system. This let us escape the roleplay of human-machine interaction entirely and rest instead on the richness of human interaction.


Recently, I've been exploring alternatives to the paradigm of command and manipulation as the basis for interacting with computers. Consider the swiping of abstract entities such as icons and windows, or the typing of commands and the expectation of results. Compare this to embodied interaction at its best: a deep conversation, or the improvisation between two dancers. The lack of designed abstraction is what keeps it open-ended, what keeps people free.

Drawing influence from the writings of the psychiatrist and philosopher Iain McGilchrist, I'm applying AI to explore alternative approaches to devising human-computer interaction rooted in resonance, sensing and belonging. You can see the results in Sonified Body. I've written more about this in a recent journal paper on emergent interfaces.

Updated: 28 Apr 2022

Publications

  • T. Murray-Browne and P. Tigas, “Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers,” Applied Sciences, 11(18): 8531, 2021.
    • pdf

    Abstract

    Most human–computer interfaces are built on the paradigm of manipulating abstract representations. This can be limiting when computers are used in artistic performance or as mediators of social connection, where we rely on qualities of embodied thinking: intuition, context, resonance, ambiguity and fluidity. We explore an alternative approach to designing interaction that we call the emergent interface: interaction leveraging unsupervised machine learning to replace designed abstractions with contextually derived emergent representations. The approach offers opportunities to create interfaces bespoke to a single individual, to continually evolve and adapt the interface in line with that individual’s needs and affordances, and to bridge more deeply with the complex and imprecise interaction that defines much of our non-digital communication. We explore this approach through artistic research rooted in music, dance and AI with the partially emergent system Sonified Body. The system maps the moving body into sound using an emergent representation of the body derived from a corpus of improvised movement from the first author. We explore this system in a residency with three dancers. We reflect on the broader implications and challenges of this alternative way of thinking about interaction, and how far it may help users avoid being limited by the assumptions of a system’s designer.

    bibtex
    @article{murraybrowne2021emergent-interfaces,
        author = {Murray-Browne, Tim and Tigas, Panagiotis},
        journal = {Applied Sciences},
        number = {18},
        pages = {8531},
        title = {Emergent Interfaces: Vague, Complex, Bespoke and Embodied Interaction between Humans and Computers},
        volume = {11},
        year = {2021}
    }
    
  • T. Murray-Browne, D. Aversano, S. Garcia, W. Hobbes, D. Lopez, P. Tigas, T. Sendon, K. Ziemianin and D. Chapman, “The Cave of Sounds: An Interactive Installation Exploring How We Create Music Together,” in Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 307-310, London, UK, 2014.
    • pdf

    Abstract

    The Cave of Sounds is an interactive sound installation formed of eight new musical instruments exploring what it means to create instruments together. Each instrument was created by an individual but with the aim of forming a part of this new ensemble, with the final installation debuting at the Barbican in London in August 2013. In this paper, we describe how ideas of prehistoric collective music making inspired and guided this participatory musical work, both in creation process and in the audience experience of musical collaboration. Following a detailed description of the installation itself, we reflect on the successes, lessons and future challenges of encouraging creative musical collaboration among members of an audience.

    bibtex
    @inproceedings{murray-browne2014cave-of-sounds,
        address = {London, UK},
        author = {Murray-Browne, Tim and Aversano, Dom and Garcia, Susanna and Hobbes, Wallace and Lopez, Daniel and Sendon, Tadeo and Tigas, Panagiotis and Ziemianin, Kacper and Chapman, Duncan},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {307-310},
        title = {The {C}ave of {S}ounds: An Interactive Installation Exploring How We Create Music Together},
        year = {2014},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The Serendiptichord: Reflections on the collaborative design process between artist and researcher,” Leonardo, 46(1): 86-87, 2013.
    • pdf

    Abstract

    The Serendiptichord is a wearable instrument, resulting from a collaboration crossing fashion, technology, music and dance. This paper reflects on the collaborative process and how defining both creative and research roles for each party led to a successful creative partnership built on mutual respect and open communication. After a brief snapshot of the instrument in performance, the instrument is considered within the context of dance-driven interactive music systems followed by a discussion on the nature of the collaboration and its impact upon the design process and final piece.

    bibtex
    @article{murray-browne2013leonardo,
        author = {Murray-Browne, T. and Mainstone, D. and Bryan-Kinns, N. and Plumbley, M. D.},
        journal = {Leonardo},
        number = {1},
        pages = {86-87},
        title = {The {S}erendiptichord: {R}eflections on the collaborative design process between artist and researcher},
        volume = {46},
        year = {2013},
    }
    
  • T. Murray-Browne. Interactive Music: Balancing Creative Freedom with Musical Development. PhD thesis, Queen Mary University of London, 2012.
    • pdf

    Abstract

    This thesis is about interactive music – a musical experience that involves participation from the listener but is itself a composed piece of music – and the Interactive Music Systems (IMSs) that create these experiences, such as a sound installation that responds to the movements of its audience. Some IMSs are brief marvels commanding only a few seconds of attention. Others engage those who participate for considerably longer. Our goal here is to understand why this difference arises and how we may then apply this understanding to create better interactive music experiences.

    I present a refined perspective of interactive music as an exploration into the relationship between action and sound. Reasoning about IMSs in terms of how they are subjectively perceived by a participant, I argue that fundamental to creating a captivating interactive music is the evolving cognitive process of making sense of a system through interaction.

    I present two new theoretical tools that provide complementary contributions to our understanding of this process. The first, the Emerging Structures model, analyses how a participant's evolving understanding of a system's behaviour engages and motivates continued involvement. The second, a framework of Perceived Agency, refines the notion of ‘creative control’ to provide a better understanding of how the norms of music establish expectations of how skill will be demonstrated.

    I develop and test these tools through three practical projects: a wearable musical instrument for dancers created in collaboration with an artist, a controlled user study investigating the effects of constraining the functionality of a screen-based IMS, and an interactive sound installation that may only be explored through coordinated movement with another participant. This final work is evaluated formally through discourse analysis.

    Finally, I show how these tools may inform our understanding of an oft-cited goal within the field: conversational interaction with an interactive music system.

    bibtex
    @phdthesis{murray-browne2012phd,
        Author = {T. Murray-Browne},
        School = {Queen Mary University of London},
        Title = {Interactive music: Balancing creative freedom with musical development},
        Year = {2012}
    }
    
  • T. Murray-Browne and M. D. Plumbley, “Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound,” in Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 213-216, London, UK, 2014.
    • pdf

    Abstract

    Note from Tim: I messed this one up. After submitting the paper, I hit some flaws in the underlying architecture that required a significant rewrite. It was beyond the funding I had available so I had to pull the plug. I learnt my lesson not to submit the paper until the code was finalised.

    We introduce Harmonic Motion, a free open source toolkit for artists, musicians and designers working with gestural data. Extracting musically useful features from captured gesture data can be challenging, with projects often requiring bespoke processing techniques developed through iterations of tweaking equations involving a number of constant values – sometimes referred to as ‘magic numbers’. Harmonic Motion provides a robust interface for rapid prototyping of patches to process gestural data and a framework through which approaches may be encapsulated, reused and shared with others. In addition, we describe our design process in which both personal experience and a survey of potential users informed a set of specific goals for the software.

    bibtex
    @inproceedings{murray-browne2014harmonic-motion,
        address = {London, UK},
        author = {Murray-Browne, Tim and Plumbley, Mark D.},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {213-216},
        title = {Harmonic Motion: A Toolkit for Processing Gestural Data for Interactive Sound},
        year = {2014},
    }
    
  • A. Otten, D. Schulze, M. Sorensen, D. Mainstone and T. Murray-Browne, “Demo hour,” Interactions, 18(5): 8-9, 2011.
    bibtex
    @article{otten2011demo-hour,
        author = {Otten, Anthony and Schulze, Daniel and Sorensen, Mie and Mainstone, Di and Murray-Browne, Tim},
        journal = {interactions},
        number = {5},
        pages = {8-9},
        title = {Demo hour},
        volume = {18},
        year = {2011},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The medium is the message: Composing instruments and performing mappings,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME-11), pp. 56-59, Oslo, Norway, 2011.
    • pdf

    Abstract

    Many performers of novel musical instruments find it difficult to engage audiences beyond those in the field. Previous research points to a failure to balance complexity with usability, and a loss of transparency due to the detachment of the controller and sound generator. The issue is often exacerbated by an audience’s lack of prior exposure to the instrument and its workings.

    However, we argue that there is a conflict underlying many novel musical instruments in that they are intended to be both a tool for creative expression and a creative work of art in themselves, resulting in incompatible requirements. By considering the instrument, the composition and the performance together as a whole with careful consideration of the rate of learning demanded of the audience, we propose that a lack of transparency can become an asset rather than a hindrance. Our approach calls for not only controller and sound generator to be designed in sympathy with each other, but composition, performance and physical form too.

    Identifying three design principles, we illustrate this approach with the Serendiptichord, a wearable instrument for dancers created by the authors.

    bibtex
    @inproceedings{murraybrowne2011nime,
        address = {Oslo, Norway},
        author = {Murray-Browne, Tim and Mainstone, Di and Bryan-Kinns, Nick and Plumbley, Mark D.},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
        pages = {56-59},
        title = {The medium is the message: Composing instruments and performing mappings},
        year = {2011},
    }
    
  • T. Murray-Browne, D. Mainstone, N. Bryan-Kinns and M. D. Plumbley, “The Serendiptichord: A wearable instrument for contemporary dance performance,” in Proceedings of the 128th Convention of the Audio Engineering Society, London, UK, 2010.
    • pdf

    Abstract

    We describe a novel musical instrument designed for use in contemporary dance performance. This instrument, the Serendiptichord, takes the form of a headpiece plus associated pods which sense movements of the dancer, together with associated audio processing software driven by the sensors. Movements such as translating the pods or shaking the trunk of the headpiece cause selection and modification of sampled sounds. We discuss how we have closely integrated physical form, sensor choice and positioning and software to avoid issues which otherwise arise with disconnection of the innate physical link between action and sound, leading to an instrument that non-musicians (in this case, dancers) are able to enjoy using immediately.

    bibtex
    @inproceedings{murray-browne2010aes,
        address = {London, UK},
        author = {Murray-Browne, Tim and Mainstone, Di and Bryan-Kinns, Nick and Plumbley, Mark D.},
        booktitle = {Proceedings of the 128th Convention of the Audio Engineering Society},
        title = {The {S}erendiptichord: {A} wearable instrument for contemporary dance performance},
        year = {2010},
        }
    
  • T. Murray-Browne and C. Fox, “Global expectation-violation as fitness function in evolutionary composition,” in Proceedings of the 7th European Workshop on Evolutionary and Biologically Inspired Music, Sound, Art and Design, pp. 538-546, Tübingen, Germany, 2009.

    Abstract

    Previous approaches to Common Practice Period style automated composition – such as Markov models and Context-Free Grammars (CFGs) – do not well characterise global, context-sensitive structure of musical tension and release. Using local musical expectation violation as a measure of tension, we show how global tension structure may be extracted from a source composition and used in a fitness function. We demonstrate the use of such a fitness function in an evolutionary algorithm for a highly constrained task of composition from pre-determined musical fragments. Evaluation shows an automated composition to be effectively indistinguishable from a similarly constrained composition by an experienced composer.

    bibtex
    @inproceedings{murraybrowne2009,
        address = {T\"ubingen, Germany},
        author = {Murray-Browne, Tim and Fox, Charles},
        booktitle = {Proceedings of the European Workshop on Evolutionary and Biologically Inspired Music, Sound, Art and Design},
        pages = {538-546},
        title = {Global Expectation-Violation as Fitness Function in Evolutionary Composition},
        year = {2009}
    }