The rise of machine listening algorithms in everyday interactions has changed not only how we interact with technology, but how we interact with each other. For an example close to home: try retaining your patois or accent when your smart speaker – the nearest thing to a cohabitant or a carer – won’t let you order food unless you flatten it.

This series of works, still in progress, explores how we distort our communications so as to construct and relate to digital models of self.

What social artifacts arise from contorting ourselves to meet the gaze of the machine?

Other iterations of this piece will be added soon, and I hope to keep this page as a kind of working document for the project 🙂

Some important notes: the nature of the ‘machine’ in question has been discussed and thoroughly challenged at length by the Algorithmic Justice League, as well as several other brilliant labs too numerous to namecheck here. There is, however, very little work investigating the harms and biases encoded and deployed in audio-based machine learning tools.