This paper proposes an understanding of big data and anxiety within Western countries as intimately coproduced and sustained within a technocratic ideological framework. With the rise of neoliberalism, which has shaped the political and economic organisation of Western societies, more ‘efficient’ systems and a neopositivist ‘science’ of ‘big data’ have been created. The accumulation of information cannot, however, be equated with the growth of knowledge, as the data captured by new technologies is used for private ends: the accumulation of capital and control by a small elite. As such, new data can be gathered about subjectivities, bodies and performance, and used to further pressure individuals to fit into preconceived frameworks and identities, as a result of the top-down creation of individualised pseudo-problems which can cause anxiety, especially when they are not tackled collectively. The paper navigates different approaches to anxiety and big data, critiquing the technocratic solutions to social problems offered by big data science. Big data will be discussed as a ‘technological’ creation, an infrastructure which defines, represents, conditions, manages and sustains human subjectivity, corporeal affects, and the organisation of society. The ambivalence of big data will be illustrated: it is used both as a tool for producing anxiety and as a means of coping with it. All these strategies are deeply political, underpinned by a feeling of anxiety which ought to be articulated politically, to encourage a cooperative search for new ways of overcoming technological control and fostering care and collective action. Sociology must remain critical in its engagement with big data in order to reveal practices of depoliticisation, quantification, standardisation, monitoring and sanctioning, and the reduction of social reality and experience to algorithms.
With the development of the internet and portable technological devices, private lives have become increasingly prone to being permeated by excessive and intrusive technologies, the body being transformed into a resource and machine which generates data. The speed and nature of data collection are accelerated to the extent where technocratic control may become incorporated and augmented within everyday life and the body/life itself. Not only has technology become ‘invisible’ in daily routine, but it also helps foster so-called progress, productivity and efficiency for capital accumulation by the elite, to the point where ‘the body is becoming a core product at the frontier of commodification’.1 This particular kind of data came to be known as ‘big data’, on the basis of which big data science and methods have developed.
The first use of the term ‘big data’ in an academic journal was in 2003, and over the course of the following five years its use became widespread.2 In a nutshell, big data is ‘a cultural, technological, and scholarly phenomenon that rests on the interplay of technology, analysis, and mythology that provokes extensive utopian and dystopian rhetoric’.3 Examples of the production of big data are: digital CCTV, mobile phones, emails, online banking, clickstream data, sensors embedded into objects, travel passes or barcodes, and social media postings.4 Although there is no consensus over its historical origins, traces of big data can be located even before the rise of the internet. For instance, the census tabulating business from which International Business Machines (IBM) emerged was already collecting personal data in the late nineteenth century; the company later profited from and maintained a monopoly over information and technology, providing, for example, systems for the identification of Jews to the SS in Nazi Germany.5 In 2014, the Chairman of IBM Europe declared that ‘to realise the promise of big data does not require sacrificing personal privacy’.6 IBM’s track record of balancing ethics against profit, and the history of exploitation that underlies its position as one of the world’s most lucrative companies, warrant a heavy degree of scepticism towards such displays of benevolence by its management.
It is necessary to acknowledge how the maintenance and development of big data have become possible. Big data is both science and technology: as technology, it is produced through labour and private property, being used by its owners for their interests; as science, it employs specific methods, rationalities and epistemologies. The material base of big data cannot be separated from the social organisation of society – without the individuals who create technological ‘hardware’ and ‘software’ through labour; without those who buy and use them (for whatever means and ends); and without the elite who monopolise it, big data would not exist in its current form.
Kitchin identifies two approaches to big data methodology: one is empiricism (data speaks for itself) and the other is data-driven science ‘that radically modifies the existing scientific method by blending aspects of abduction, induction and deduction’. Whilst he predicts that the latter approach will gain more ground within academia, he calls for a more reflexive data-driven science. However, the claim that big data has developed a new science is a ‘mythological artefact’, as it implies that the ‘precepts and methods of scientific research change as the data sets increase in size’.7 Indeed, its method is simply ‘N = all’, where N is merely an epistemic collective which shares particular features – subjectivities and preferences which, if not understood sociologically, may be misrepresented.8 Big data’s apparent solutions are responses to ideological pseudo-problems (presented as the responsibility of structurally marginalised groups), serving ideals of scientific progress and economic accumulation.
Data mining and algorithms are marketed as the ultimate source of knowledge, technology allegedly offering the possibility to go beyond human cognition, to offer neutral, objective knowledge based on patterns and correlations. The reliance on algorithms and positivist methods to dismiss metaphysical claims, and the belief that something other than human interpretation can explain how reality really is, form the neopositivist method of big data. As a consequence, hermeneutics is replaced with statistics, and ‘truth’ with data itself.9 In short, big data science is ‘a shallow version of neoempiricism’,10 a ‘chimera’11 which has declared war on theory; an opaque, private enterprise which does not, under current conditions, enhance the growth of knowledge, nor allow for contestation from below towards a more dialogic science and knowledge.
The use of big data for social analysis superficially assesses behaviour as it ‘appears’ (through the lens of the privileged), without accounting for inequalities and the negotiation of meanings across contexts, leading to an intensification of stereotypes. Those with epistemological authority use means of disseminating knowledge to frame debates and policies in their own terms, often neglecting social problems which do not ‘fit’ the dominant paradigm. For instance, big data cannot, by itself, explain and redress institutional racism: ‘data is not color blind, not gender blind and marketers use it to have ever more precise categories about you’. The translation of cultural clichés into ‘empirically verifiable datasets’ builds biases into the knowledge produced by the subsequent findings.12 An illustrative example is the research done by COSMOS (a cross-institutional, academic research group) in which subjectivities are inferred when using ‘data-light’ tweets (that is, tweets which do not explicitly include references to gender, location, class):
although such data is not present in an explicit manner, the tools available on the COSMOS platform enable it to be inferred with a relatively high level of confidence.13
For sociologists, this affirmation should raise ethical, political, philosophical and sociological concerns. Hacking’s notion of ‘making up people’ is useful for explaining the above statement: ‘who we are is not only what we did, do, and will do but also what we have done and may do’.14 ‘Making up’ categories based on arbitrarily chosen features of individuals, in order to make sense of ‘raw’ data and to define the ‘wholeness’ of personhood, misrecognises individual and social changes, possibilities and capabilities. Needless to say, COSMOS’ tools are not ‘neutral’, unbiased or, for that matter, accurate, as social categories are contested, fluid and self-defined. The assumptions behind big data science take politics and historical contingencies away from people’s decisions and daily struggles, and assess patterns at face value. In doing so, they reproduce the ideology of neoliberalism, which hides its political assumptions and tradition in order to present an individualistic and inward-looking self who no longer consumes only to satisfy their (supposed) needs, but in pursuit of a state of happiness and fulfilment. Neoliberalism can be briefly explained as the ‘disenchantment of politics by economics’,15 its success resting on the (neo)positivist knowledge which considers society as nothing more than the sum of its parts – what Gillespie calls ‘calculated publics’. The knowledge created through the use of new forms of technology by studies such as COSMOS’ is used to define and order society and its problems. Quantification and categorisation occur without the subjects even being aware.
Mills sees technologically advanced societies of the twentieth century as characterised by public issues and private troubles. In The Sociological Imagination, he makes an apt distinction between ‘the personal troubles of milieu’ and ‘the public issues of social structure’.16 He goes further to say that when one is not aware of any values they may hold, and does not feel threatened, one becomes indifferent and apathetic. When, on the other hand, one is not aware of any values they may hold, but is aware of threats, one experiences uneasiness which can freeze and bring a person to despair, to a state of inaction, if the fear is total enough. Mills described the society he was living in as marked by ‘private uneasiness and public indifference’.17 This depiction of ‘uneasiness’ as private trouble and ‘indifference’ as public issue still characterises contemporary Western societies, but more than before, private uneasiness has reached higher levels due to audit culture, austerity measures and their attack on community ties, and the technological permeation and organisation of society and private lives.
Under neoliberalism, ‘the problem of anxiety, considered on its own terms, is generally not recognised to be in need of any substantive sociological analysis’.18 The very portrayal of anxiety as a negative condition asks for solutions to ‘combat’ it. In this paper I discuss the existential and phenomenological experiences of anxiety as affect. Affect is always a movement which occurs in-between ‘micro’ and ‘macro’ spheres, throughout processes and relations within and between individuals, collectives and institutions. It belongs to the ‘bodily sphere’, and it involves ever-changing human and nonhuman bodies which can disrupt and mobilise action and create patterns.19 Through the understanding of anxiety as affect, we can discuss the inter-relationality between humans and matter. In this case, big data is a matter, a technology which has the capacity to be self-organised (based on algorithms and technological capacities) and informational.20 Technologies and affects ought to be mobilised outside pre-established, top-down frameworks of reference, in order to allow for alternative forms of praxis and the creation of new spaces for contestation and care outside institutional and technologically-mediated interactions. Seeing big data and anxiety as co-constitutive processes in continuous development and change allows for possibilities to devise strategic deviations. In other words, regarding anxiety as an experience triggered by external pressures and ideologies avoids the pathologisation of the human condition and the relativisation of ‘cultural effects’.
Big data as a technology of anxiety
Whilst technology and digitalisation save time and relieve individuals of certain responsibilities and chores, they also affect the notion of community. In a neoliberal, individualist environment, subjects are made to compete not only among themselves but also with technological machines, as the individual is remotely supervised and tethered to technological devices. Hardship, ‘failure’ to perform according to set matrices, and anxiety are interpreted as personal limitations which need overcoming, self-training and management, rather than being recognised as structural issues emanating from such developments as the de-skilling of jobs, as well as the preservation of profit by technology owners.
In the construction of the individual as a machine for capital production, the category of the ‘quantified self’ emerges: the person who willingly uses technology to increase their wellbeing and (primarily) their productivity. The precursor of the quantified self is ‘the entrepreneur’, whose subjectivity is linked to classical economics, its main features being ‘creativity’ and ‘connectivity’.21 In the words of Kirzner, a member of the Austrian School of economics, ‘all individual action, when unfettered by the state is, by nature, entrepreneurial’.22 The assumed rationality of the entrepreneurial self is utility maximisation – a consequentialist and behavioural approach based on rational choice theory, according to which behaviour is predictable and linear. Herbert Simon, contradicting the narrowness of this model, holds that the rationality which characterises human psychology is ‘bounded rationality’: decision-making is based on the amount of information at one’s disposal – which may be limited. Decisions can be influenced by unpredictable factors not necessarily related to utility maximisation. In other words, we are all ‘satisficers’ – always looking for ‘good enough’ solutions and shortcuts.23
Whilst the tools for mapping the entrepreneur’s behaviour used to be tabular datasets, the mapping of ‘bounded rationality’ is based on networks and correlations which can be ever-changing, and thus collectors need to engage in constant monitoring. Companies can pretend to ‘inform’ individuals of a variety of ‘choices’, but in doing so, pseudo-choices, pseudo-problems and abstract categorisations are created. Correlations are then analysed on the basis of behavioural and psychological patterns. I contend that the belief in the plausibility of this type of rationality underpins big data and marks the distinction between ‘the entrepreneur’ and ‘the quantified self’.
In 2007, the Quantified Self movement emerged, which adopts a personalised, technology-driven approach to health, education and other social, political and economic aspects of society, its motto being ‘self-knowledge through numbers’. Its adherents claim they feel empowered by using technology, self-management and monitoring.24 ‘The quantified self’ does not have a unified definition – it is usually used to describe the happy consumer whose self-identity is based on their consumption and lifestyle choices, and the desire to be ‘successful’. However, quantification in current Western societies is not a desire which leads to more individual autonomy, but rather an imposed top-down process which affects individuals and groups regardless of their social positionality. It is for this reason that I suggest that the definition of the ‘quantified self’ should be broadened to encompass not just the conspicuous consumer, but the full complexity of social and political processes. For clarity, therefore, the self-identifying ‘quantified self’ will be referred to in this paper as the ‘quantified consumer’.
An example of a self-managing technology used by quantified consumers is ‘Happify’, a mobile application marketed as a scientifically-approved tool to be used for one’s ‘natural’ self-interest to quantify and enhance their happiness. According to its developers, happiness is a choice, as one can be ‘trained to be happy’.25 As aesthetics and the promise of the script-based ‘good life’ occupy an ever greater place in daily life, we can assume that ‘Happify’ users are generally concerned with self-evaluation against a standardised ‘norm’, as a way of ‘passing’ for privileges.26 However, this readiness to measure one’s happiness through the use and sharing of data converges with dominant actors’ economic interests. At one end of the spectrum there is the company which profits from selling ‘Happify’, and at the other end there is the employer who profits from the emotional labour and discipline of the worker/user. In between we find the complexity of social, political, historical and economic dynamics, systems, affects, laws and practices which are reinforced and legitimised, steadily decreasing the quantified consumer’s control over their own life. In this context, individual autonomy is illusory: one has to adapt to an audit culture and strategise their way to ‘happiness’ and success by acting competitively and in isolation from others. This inward-looking, ‘self-fixing’ attitude discourages any attempts to challenge organisational practices. Here it is ‘rational’ to be selfish, to avoid ‘standing out’. Thus, affective work is constitutive of technological control.
Anxiety is not discussed in a society where greed and competition are promoted as ethical, empowering approaches to living. Contexts and space to discuss one’s feelings cannot be integrated in technocratic and neoliberal frameworks, in the same way as subjectivities and affects cannot be fully captured by big data. Although ‘the loneliness of leadership’ is lamented on business websites, the ideological underpinning of one’s desire to be a ‘leader’ (despite the anxiety risks), and the destructive effects of competition are ignored. One firm, Columbia Consultancy, suggested that ‘advisors […] listen to your ideas, keep them in confidence, and feed back to you what they honestly think’ (my emphasis).27 However, these honest discussions would still occur within the entrepreneurial and productivist framework which sustains the belief that one’s self-esteem ought to rely on others’ envy.
Although the uni-linear economic rationality of neoliberal societies assumes that all wants have equal merit,28 the quantified consumer’s interaction with big data is qualitatively different from the experience of the jobseeker’s benefits claimant: the former is depicted as a striver, the latter as a skiver. The punitive aspect of quantification is revealed more overtly in the use of big data as part of the management of the performance, body and autonomy of the jobseeker’s benefits claimant in Britain. Three instances will be discussed here. Firstly, a digital divide is maintained through an imbalance of access to, and skills for using, technology. The Universal Credit scheme (a means-tested benefit introduced in Britain in 2013, replacing previous in- and out-of-work benefits with one payment) allocates benefits directly into bank accounts.29 Applications are online-only, and applicants often need to use public libraries for internet access, whilst libraries are increasingly being privatised.30 Not enough digital skills support is available, as advocacy organisations’ funding has been cut drastically.31
Secondly, the monitoring of the jobseeker and benefits claimant is a tool for financial sanctioning. In 2013 alone, 871,000 people were sanctioned by the Department for Work and Pensions (Gentleman 2014). Since March 2013, jobseekers and benefit claimants have been forced to register on the Universal Jobmatch website to find a job; if they spend less than 35 hours per week applying for jobs, they can face benefits sanctions.32 The scheme permanently and remotely monitors claimants and forces them off benefits – £300 million of benefits are withheld in sanctions every year.33 Those made vulnerable are used as resources for data and capital extraction, whilst greater control by the government over public finances, against individuals, is established. Human labour is increasingly being ‘de-skilled’ through its replacement with time- and cost-saving technology; the ideal twenty-first-century worker is flexible, committed to a disciplined work ethic, and grateful for their work being waged, despite being squeezed to comply with worsening conditions. Uncertainty, coupled with the ‘benefit scroungers’ discourse and the need for constant proof of moral genuineness when asking for support, has demoralising effects. Being made the subject of social failure was an unbearable experience of anxiety which contributed to the death of a man whose benefits were stopped; shortly before his death, he stated that ‘even the sight of a CV would give me an anxiety attack’.34
Thirdly, one’s own bodily presence is monitored by the Job Centre: applicants are forced to check in five days a week to avoid financial sanctions.35 According to a benefits claimant, ‘I have just been sanctioned for three weeks for missing an appointment. Although they say missing, I was actually there, but five minutes late’.36 Time, in this case, was used as an excuse for restricting somebody’s needed financial support, despite the person actually being present. The jobseeker claiming benefits constantly affects and is affected by mental, physical, temporal, spatial, bureaucratic, political, economic, technological and informational spheres and influences. As an individual’s worth is filtered through quantified performance data, a zero-sum situation is created: they use big data as a coping tool to avoid sanctions, whilst the same big data is used as a tool for sanctions.
The deeper (more fine-grained) and wider (more voluminous) the data, the better – so the story goes. Subjects are anxious not to reveal too much, nor too little. On the side of the surveillers, the fear of missing opportunities to access the fullest meaning of (potentially) available data animates organisations to invest in data-mining technologies:
If we take these twinned anxieties — those of the surveillers and the surveilled — and push them to their natural extension, we reach an epistemological end point: on one hand, the fear that there can never be enough data, and on the other, the fear that one is standing out in the data.37
Although companies can detect patterns through monitoring, explaining those patterns is another matter. In effect, big data scientists try to control outcomes and policies (and populations’ behaviour), and later present their findings as true, pretending to have had access to the ‘inner’ subjectivity of their subjects. From all the examples above, it is clear that the affect of anxiety plays a constitutive role in the production of big data, just as big data constitutes a factor in the creation of anxiety.
In line with Mills’s description of the workings of technologically developed societies, the problems created by big data science become pseudo-public issues. These public problems are based on abstract constructions of categories. When one fails to meet expectations and standards, the feelings of ‘lack’ and guilt are translated into private trouble, instead of structural failure caused by an elite. Private ownership of production, unequal distribution of technology, algorithmic manipulation, prejudices, and epistemological authority determine who and what counts in datasets, and which type of technology is to be invested in as part of strategies for capital accumulation. Within neoliberalism, the individualistic portrayal of the subject, austerity measures and the myth of the ‘good life’ affect both the vulnerable and the happy consumer. The subjects are coerced into making decisions which can be manipulated much more easily, creating patterns as reactions to punitive processes. The promise of big data to ‘predict’ behaviour becomes less a ‘prediction’ when populations are under constant monitoring, which in recent times encompasses unwaged bodily labour, as the collection of personal information is capitalised.
Ioana Cerasella Chis is an MA student in Social and Political Theory at the University of Birmingham and an editor of The New Birmingham Review, a progressive student journal which publishes pre-doctorate work.
1 Stephen R. Bates, ‘The Emergent Body: Marxism, Critical Realism and the Corporeal in Contemporary Capitalist Society’, Global Society, 29.1 (2015), 128-47, p. 146.
2 Tom Boellstorff, ‘Making Big Data, in Theory’, First Monday, 18.10 (2013) <http://dx.doi.org/10.5210/fm.v18i10.4869> [accessed 31 October 2014].
3 danah boyd and Kate Crawford, ‘Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon’, Information, Communication & Society, 15.5 (2012), 662-79, p. 662.
4 Rob Kitchin, ‘Big Data, New Epistemologies and Paradigm Shifts’, Big Data & Society, 1.1 (2014), 1-12, p. 2.
5 Edwin Black, ‘IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation’, The New York Times (October 2000); see <https://www.nytimes.com/books/first/b/black-ibm.html> [accessed 6 November 2014].
6 Harry van Dorenmalen, ‘Big Data is in Danger of Becoming Demonised’, EurActiv, 23 March 2014 <http://www.euractiv.com/sections/infosociety/big-data-danger-becoming-demonised-300521> [accessed 6 November 2014].
7 Kate Crawford, Mary L. Gray and Kate Miltner, ‘Critiquing Big Data: Politics, Ethics, Epistemology’, International Journal of Communication, 8 (2014), 1663-72, p. 1664.
8 Tim Harford, ‘Big Data: Are We Making a Big Mistake?’, Financial Times, 28 March 2014 <http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html> [accessed 24 December 2014].
9 Crawford, Gray and Miltner, ‘Critiquing Big Data’, p. 1670.
10 Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013), p. 4.
11 Martin H. Frické, ‘Big Data and Its Epistemology’, Journal of the American Society for Information Science and Technology, 66.4 (2015).
12 Cecilia Rabess, ‘Can Big Data Be Racist?’, The Bold Italic, 31 March 2014 <http://www.thebolditalic.com/articles/4502-can-big-data-be-racist> [accessed 15 December 2014].
13 William Housley and others, ‘Big and Broad Social Data and the Sociological Imagination: A Collaborative Response’, Big Data & Society, 1.2 (2014), 1-15, p. 8.
14 Ian Hacking, ‘Making Up People’, in The Science Studies Reader, ed. Mario Biagioli (London: Routledge, 1999), p. 165.
15 William Davies, The Limits of Neoliberalism: Authority, Sovereignty and the Logic of Competition (London: SAGE Publications, 2014), p. 4.
16 C. Wright Mills, The Sociological Imagination (Oxford: Oxford University Press, 1959), p. 8.
17 Ibid., p. 11.
18 Iain Wilkinson, Anxiety in a ‘Risk’ Society: Health, Risk and Society (London: Routledge, 2001), p. 130.
19 Carolin Wiedemann, ‘Swarms, Memes and Affects: A New Materialist Approach to the Infrastructure of 4chan’, in Neighbourhood Technologies: Media and Mathematics of Dynamic Networks, ed. Tobias Harks and Sebastian Vehlken (Chicago and Zurich: Chicago University Press, forthcoming 2015), p. 4.
20 Patricia T. Clough, ‘The Affective Turn: Political Economy, Biomedia and Bodies’, in The Affect Theory Reader, ed. Melissa Gregg and Gregory J. Seigworth (London: Duke University Press, 2010), p. 210.
21 Simon Lilley and Geoff Lightfoot, ‘The Embodiment of Neoliberalism: Exploring the Roots and Limits of the Calculation of Arbitrage in the Entrepreneurial Function’, The Sociological Review, 62.1 (2014), 68-89, p. 71.
22 Ibid., p. 72.
23 Herbert A. Simon, Models of Bounded Rationality: Empirically Grounded Economic Reason, 3 vols (Cambridge, MA: MIT Press, 1982-97), iii, pp. 291-8.
24 ‘The Quantified Self: Counting Every Moment’, The Economist <http://www.economist.com/node/21548493> [accessed 15 December 2014].
25 Melanie Pinola, ‘Happify Trains You to be Happy, Using Games Backed by Science’, LifeHacker, 24 October 2013 <http://lifehacker.com/happify-trains-you-to-be-happier-using-science-based-g-1451531808> [accessed 25 December 2014].
26 Kitchin, ‘Big Data, New Epistemologies and Paradigm Shifts’, p. 8.
27 Columbia Consultancy, ‘Decision Making: The Loneliness of Leadership’ (2006) <http://www.columbiaconsult.com/pubs/v46_Feb06.html> [accessed 22 December 2014].
28 Rod Hill and Tony Myatt, The Economics Anti-Textbook: A Critical Thinker’s Guide to Microeconomics (London: Zed Books, 2010), p. 16.
29 TUC, Will Universal Credit Work? – Report (London: Trades Union Congress, 2013), p. 2.
30 Alison Flood, ‘Kirklees set to close 24 of its 26 libraries’, The Guardian, 30 July 2014 <http://www.theguardian.com/books/2014/jul/30/kirklees-close-24-26-libraries-cuts> [accessed 23 December 2014].
31 Amelia Gentleman, ‘Charities say millions without internet access will face benefits struggle’, The Guardian, 21 February 2013 <http://www.theguardian.com/society/2013/feb/21/universal-credit-online-benefit-claimants> [accessed 18 December 2014].
32 Citizens Advice Bureau, ‘Universal Credit – What is the Work Search Requirement?’, Citizens Advice Bureau <http://www.adviceguide.org.uk/england/benefits_e/benefits_welfare_benefits_reform_e/benefits_uc_universal_credit_new/benefits_uc_work_related_requirements/benefits_uc_what_are_the_work_related_requirements/benefits_uc_what_is_the_work_search_requirement/uc26_uc_what_is_the_work_search_requirement.htm> [accessed 18 December 2014].
33 HC Deb, 18 December 2014, vol 589, c1664; see <http://www.publications.parliament.uk/pa/cm201415/cmhansrd/cm141218/debtext/141218-0004.htm#141218-0004.htm_spnew3> [accessed 20 December 2014].
34 Guardian Readers and James Walsh, ‘“Even the sight of a CV would give me an anxiety attack”: Guardian readers on benefit sanctions’, The Guardian, 5 August 2014 <http://www.theguardian.com/society/2014/aug/05/even-the-sight-of-a-cv-would-give-me-an-anxiety-attack-guardian-readers-on-benefit-sanctions> [accessed 13 November 2014].
35 Maya Oppenheim, ‘Claimants will be forced to visit job centres for 35 hours a week or face sanctions’, New Statesman, 11 September 2014 <http://www.newstatesman.com/politics/2014/09/claimants-will-be-forced-visit-job-centres-35-hours-week-or-face-sanctions> [accessed 18 December 2014].
36 Guardian Readers and Walsh, ‘Even the Sight of a CV Would Give Me an Anxiety Attack’.
37 Kate Crawford, ‘The Anxieties of Big Data’, The New Inquiry, 30 May 2014 <http://thenewinquiry.com/essays/the-anxieties-of-big-data/> [accessed 26 December 2014].