I’ve spent the last 18 months working extensively on various forms of social media automation and manipulation. While everyone is now writing about ‘bots’ and ‘trolls’, these concepts are not well understood, and the research landscape (especially for social scientists) remains nascent. At the moment, I’m particularly interested in addressing the methodological and conceptual issues facing this research. First, what are bots, really? What do competing definitions of the term amongst different groups of scholars tell us, and what challenges do these competing definitions and understandings of exactly what ‘bots’ are pose for policy and research? My colleague Doug Guilbeault and I have a new paper (see ‘Understanding Bots’, below) in which we examine this very question.
I’m also interested in other critical questions: how do these bots actually work, do they really have effects on opinion formation, as is commonly claimed, and how do we best study them? While these questions seem simple, studying disinformation empirically is really, really difficult! I wrote my Master’s thesis at the OII on this topic to try to address some of these questions.
I have conducted an in-depth study of political automation in Poland, published as a working paper at the OII. I am currently working on a comprehensive overview of bot detection methods (pre-print coming soon).