Tuesday, 10 December 2019

Winter break

The journal Big Data & Society will be on winter break from December 21st to January 5th. Please expect delays in the processing and review of your submission, and in related correspondence, during that time.

Happy Holidays!

Monday, 4 November 2019

Engaging with ethics in Internet of Things: Imaginaries in the social milieu of technology developers

Funda Ustek-Spilda, Alison Powell, Selena Nemorin
Big Data & Society 6(2), https://doi.org/10.1177/2053951719879468. First published Oct 3, 2019.
Keywords: Internet of Things, social milieu, ethics, virtue ethics, responsible technology

Discussions about the ethics of Big Data often focus on the ethics of data processing – the ‘generation, recording, curation, processing, dissemination, sharing and use’ of data and algorithms (including machine learning and artificial intelligence), as well as corresponding practices such as programming, hacking and coding (Floridi and Taddeo, 2016: 1). Data-based systems, however, do not come from nowhere. In this article, we attempt to shift the focus of ethical discussion from the context of data processing to the contexts of data production. We attend to the ethical qualities of the social milieu in which data-intensive technologies are produced, and to the practical reasoning people in this milieu undertake in their day-to-day encounters with technology development.

Our analysis is based on our ongoing work as part of a research project titled VIRT-EU: Values and Ethics for Responsible Innovation in EUrope. As part of this research, we have conducted multi-site ethnographic fieldwork with developers, designers and entrepreneurs in the IoT startup ecosystem in Europe. Between 2016 and 2018, we followed industry meetups, hardware and software showcases, workshops and industry conferences, conducted in-person interviews and held co-design workshops, amounting to more than 100 unique fieldwork visits. We also analysed ten years of data from the records of IoT meetups in Europe. In conducting our analysis, we sought answers to two questions: How do developers in start-ups and small companies practise ethical decision-making? And what are the technological, business and social contexts that influence these decisions?

Our findings indicated that the social milieu of technology development, strongly focused as it is on innovation, attracting funding, corporate reputation and market share, creates challenges for explicit engagement with ethics. This, we argued, poses a major constraint on systemic change in the field. Many people considered ethics important as a topic, but not urgent on their list of things to do. From our analysis, we developed three action positions to illustrate points of engagement with ethical and moral concerns. These positions are of course not exhaustive of those available in these spaces, but they capture the most significant directions of engagement we observed in our fieldwork: the Disengaged, the Pragmatist and the Idealist. Within the Disengaged position, many IoT developers remained ambivalent about the 'use' of ethical reflection and discussion beyond compliance with existing regulations, concentrating their attention instead on issues of business and financial stability. To illustrate, among the nearly 90 meetups held by the IoT London Meetup Group over the last ten years, our analysis indicated that ethics featured as a topic only once, while GDPR was mentioned often. The Pragmatist position places ethical concerns squarely in relation to business interests, though without necessarily subsuming them under those interests. We found that ethics was invoked in relation to new and emerging market opportunities and as a way for businesses to limit financial liability. The Idealist position, on the other hand, advocated action on values and principles by incorporating them directly into business ventures and social networks. A series of IoT manifestos advanced some of these perspectives (Fritsch et al., 2018), and some developers we interviewed also positioned themselves and the trajectories of their ventures along these lines. A strong identification with ‘we’ rather than ‘I’, a separation of individual and collective subjectivities in relation to ethical concerns, and an active engagement with the responsibility for producing ethical technologies (and futures) were shared among these individuals.

Our analysis demonstrates that the extent to which individual subjectivity can influence engagement in ethical action may depend on the organisational environment in which technology developers are embedded. This means that constraints (financial, structural, social or other) are not merely external obstacles to be overcome for ethical action to take place, but are intrinsic to the social milieu technology developers are part of. This goes some way toward explaining why, on the one hand, we are seeing a plethora of new ventures subscribing to emerging fields such as ‘technology for social good’ or ‘business with purpose’ while, on the other, technology products continue to violate privacy, intensify bias and entrench social power. Put simply, it is not that technology developers lack ‘virtuous intentions’, but that the social milieu they are part of structures their space for action.


Floridi L and Taddeo M (2016) What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374(2083): 20160360.

Fritsch E, Shklovski I and Douglas-Jones R (2018) Calling for a revolution: An analysis of IoT manifestos. In: Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April 2018, p.302. New York: ACM Press.

Monday, 21 October 2019

Event: Data empires – The birth of sensory power

Engin Isin (QMUL) will deliver a lecture based on a chapter co-written with Evelyn Ruppert (Editor, BD&S) in their recently published edited collection, Data Politics: Worlds, Subjects, Rights (Bigo D, Isin E and Ruppert E, eds, 2019). Speaking in relation to a blog post published on this site, he will suggest that since the 1980s we have possibly been witnessing the birth of sensory power. The lecture is sponsored by the Goldsmiths Centre for Postcolonial Studies and will take place on 9 December 2019, 17:00–19:00, in RHB 221, second floor, Richard Hoggart Building, Goldsmiths, University of London. More information here.

Tuesday, 1 October 2019

Datafied knowledge production: practices, mechanisms and imaginaries at work in big data analyses

Special Theme Issue
Guest lead editors: Nanna Bonde Thylstrup*, Mikkel Flyverbom**, Rasmus Helles***

* Aarhus University
** Copenhagen Business School
*** University of Copenhagen

Digital transformations, such as datafication and algorithmic sorting, create new conditions for how we come to see, know, feel and act. This special issue explores the intersections between digital transformations and knowledge production by asking questions such as: What does datafied knowledge production look like? Which digital infrastructures support its future development? And what potentialities and limits do datafied forms of analysis and knowledge production contain? The responses we offer include the suggestion that, while the resources, material features and analytical operations involved in datafied knowledge production may be different, many fundamental concerns about epistemology, ontology and methods remain relevant to understanding what shapes it. By seeking to understand and explicate such assumptions, operations and consequences, the articles in this special issue sketch the contours of knowledge production in a digital and datafied world.

Editorial: Datafied Knowledge Production
Nanna Bonde Thylstrup, Mikkel Flyverbom, Rasmus Helles

Datastructuring—Organizing and curating digital traces into action
Mikkel Flyverbom and John Murray

Data out of place: data waste and the politics of data recycling
Nanna Bonde Thylstrup

Make data sing: The automation of storytelling
Kristin Veel

The optical unconscious of Big Data: Datafication of vision and care for unknown futures
Daniela Agostinho

Data in the smart city: How incongruent frames challenge the transition from ideal to practice
Anders Koed Madsen

Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media
Anja Bechmann and Geoffrey C Bowker

Thursday, 26 September 2019

Event: Data Rights - Subjects or Citizens?

On 11 November, the Mile End Institute (Queen Mary, University of London (QMUL)) is hosting an event that will discuss how the exponential accumulation of data from everyday online and offline activities raises tensions about who has the rights to produce and own such data. A panel will feature three speakers from a recently published book edited by Didier Bigo, Engin Isin, and Evelyn Ruppert (Editor, BD&S): Data Politics: Worlds, Subjects, Rights (2019). Engin Isin (QMUL) will chair the panel, with Elspeth Guild (QMUL), Jennifer Gabrys (Cambridge; Co-editor, BD&S) and Didier Bigo (Sciences Po and KCL) speaking about their contributions. Click here for more information and to register. The book is Open Access and a PDF copy can be downloaded here.

Monday, 9 September 2019

How should we analyze algorithmic normativities?

Special theme issue
Guest editors: Francis Lee* and Lotta Björklund Larsen**

* Uppsala University, Sweden
** TARC (Tax Administration Research Centre) at the University of Exeter Business School, U.K.

Algorithmic normativities shape our world. But how do we analyze them?
Algorithms are making an ever-increasing impact on our world. In the name of efficiency, objectivity, or sheer wonderment, algorithms are increasingly intertwined with society and culture. It is more and more common to argue that algorithms automate inequality, that they are biased black boxes that reproduce racism, and that they control our money and information.1 Implicit, or sometimes very explicit, in many of these observations is that algorithms are meshed with different normativities, and that these normativities come to shape our world. The special theme Algorithmic normativities contains a diversity of analyses and perspectives that deal with how algorithms and normativities are intertwined.

In the editorial, How should we theorize algorithms? Five ideal types in analyzing algorithmic normativities (abridged in this blog post), we playfully draw on the metaphor of the engine hood to situate and discuss the articles contained in the special theme in relation to five analytical ideal types. We all recognize the trope of going under the hood to understand the nefarious politics of biased algorithms. But what other tropes of analysis can we identify in critical algorithm studies? And what are the benefits and drawbacks of these different analytical ideal types? In doing this, we categorize the articles in the special theme according to how the different contributions theorize, analyze, and understand algorithmic normativities. A goal is thus to continue the meta-discussion about analytical strategies in critical studies of algorithms (cf. Kitchin 2014).

Under the hood: the politics inscribed in the algorithm
The first ideal type looks “under the hood” of the algorithm, which implies an analysis of the politics, effects, and normativities that are designed into algorithms. In this analytical ideal type, the logic of the algorithm appears like a deus ex machina impinging on society’s material politics (see for instance Winner’s seminal article on the politics of artefacts). This is reflected, for instance, in Christopher Miles’ article, where he illustrates how algorithms become intertwined with specific normativities in American farming. Miles shows that although new digital practices are introduced, existing socio-economic normativities are often preserved or even amplified, and also seem to thwart other imagined futures. Algorithms here emerge as material laws of society that reshape it according to different ideals. But what other ways of approaching algorithmic normativities are productive in understanding the emerging algorithmic society?

Above the hood: algorithms in practices
On the other side of this spectrum of ideal types, working “above the hood”, algorithms can be seen as a “contingent upshot of practices, rather than [as] a bedrock reality” (Woolgar and Lezaun 2013, 326). In this ideal type, for instance, Malte Ziewitz’ (2017) account of an “algorithmic walk” humorously points us toward the constant work of interpreting, deciding, and debating about algorithmic output. In this vein, Farzana Dudhwala and Lotta Björklund Larsen's contribution to the special theme proposes that people recalibrate algorithmic outputs: sometimes people accept the algorithmic output, sometimes not, but always with the aim of achieving and establishing a “normal” situation. Patricia de Vries and Willem Schinkel’s article also undertakes an analysis of the practical politics of algorithms. Their paper addresses how three artists construct face masks to resist and critique facial recognition systems, thereby exploring the practices and negotiations around algorithms of surveillance and control. In sum, this ideal type brings negotiations around the algorithm into focus, while the politics of “under the hood” recedes into the background, perhaps leading to the algorithm “itself” becoming obscured behind human action.

Hoods in relations: a relational perspective on algorithms
A middle ground between “under the hood” and “above the hood” might be offered by an analytical ideal type that highlights the intertwining of algorithmic and human action (cf. Callon and Law 1997). This is the strategy in Francis Lee et al.’s article. In it, they criticize the current focus on fairness, bias, and oppression in algorithm studies as a step toward objectivism and realism. They instead propose paying attention to operations of folding as a tool to highlight the constant intertwining of, for instance, practices, algorithms, data, or simulations. With a similar relational perspective, Elizabeth Reddy et al.’s article highlights, through the algorithmic experiments of a comic book artist, how algorithms can be used to automate the work of assembling stories. Even though an algorithm might produce “the work”, ideas about authorship and accountability are still organized around human subjectivity and agency. The power of this perspective is that it allows a sensitivity to how agency is constituted in the relation between humans and algorithms, and how action is a negotiation between artefact and human. But it might also appear blind to power struggles and the oppression of weaker actors, as well as apolitical, ignoring the effects that algorithms could have on the world (cf. Star 1991; Winner 1993; Galis and Lee 2014).

Lives around hoods: the social effects of algorithms
The fourth ideal type widens the lens and homes in on the social effects of algorithms. This analytical position takes an interest in infrastructures of classification and how they affect society and the lives of people. This type of analysis highlights how people’s lives become shaped by algorithmic systems.2 In this vein, Helene Gad Ratner and Evelyn Ruppert’s article analyzes the transformation of data for statistical purposes to show how metadata and data cleaning as aesthetic practices are in fact classification struggles with normative effects. Ratner and Ruppert thus highlight the effects of classification work in relation to homeless and student populations—how people are performed with infrastructures.

The mobile mechanics: reflexivity and the study of algorithms
Finally, we wish to highlight “the mobile mechanics”. By being attentive to how we as social scientists relate to algorithms, as well as to those who work with them, our inherent normativities and presumptions come to the fore. In the special theme, David Moats and Nick Seaver’s article challenges our thoughts about how computer scientists understand the work of social scientists—and vice versa. Moats and Seaver document their attempt to arrange an experiment with computer scientists to test ingrained boundaries: how can the quantitative tools of computer science be used for critical social analysis? As it turns out, the authors were instead confronted with their own normative assumptions. Jeremy Grosman and Tyler Reigeluth’s article offers a similarly reflexive approach. In their contribution, the notion of normativity in algorithmic systems is discussed from various analytical positions: technical, sociotechnical, and behavioral. The authors argue that algorithmic systems are inhabited by normative tensions between different kinds of normativities, and that a fruitful approach is to explore the tensions instead of the normativities themselves.

In conclusion
A point of departure for this special theme was that algorithms are intertwined with normativities at every step of their existence: in their construction, implementation, as well as their use in practice. The articles in this special theme thus scrutinize ideas of normativities in and around algorithms: how different normativities are enacted with algorithms, as well as how different normativities are handled when humans tinker with algorithms. The array of theoretical approaches—anxieties, pluralities, recalibrations, folds, aesthetics, accountability—that implicate algorithms forces us to engage with the multiple normative orders that algorithms are entangled with. We wish you good reading and welcome comments and further discussions! Welcome to the special theme Algorithmic normativities.

Amoore L (2013) The Politics of Possibility: Risk and Security beyond Probability. Durham, NC: Duke University Press.

Beer D (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society 11: 985–1002.

Callon M and Law J (1997) After the individual in society: Lessons on collectivity from science, technology and society. Canadian Journal of Sociology 22(2): 165–182.

Dourish P (2016) Algorithms and their others: Algorithmic culture in context. Big Data & Society 3(2): 1–11.

Galis V and Lee F (2014) A sociology of treason: The construction of weakness. Science, Technology, & Human Values 39: 154–179.

Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ and Foot KA (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp. 167–194.

Kitchin R (2014) Thinking critically about and researching algorithms. SSRN Electronic Journal. DOI: 10.2139/ssrn.2515786.

Neyland D (2016) Bearing account-able witness to the ethical algorithmic system. Science, Technology, & Human Values 41(1): 50–76.

Schüll N (2012) Addiction by Design: Machine Gambling in Las Vegas. Princeton, NJ: Princeton University Press.

Seaver N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4: 1–12.

Star SL (1991) Power, technologies and the phenomenology of conventions: On being allergic to onions. In: Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge, pp. 26–56.

Striphas T (2015) Algorithmic culture. European Journal of Cultural Studies 18: 395–412.

Totaro P and Ninno D (2014) The concept of algorithm as an interpretative key of modern rationality. Theory Culture & Society 31: 29–49.

Winner L (1993) Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology & Human Values 18: 362–378.

Woolgar S and Lezaun J (2013) The wrong bin bag: A turn to ontology in science and technology studies. Social Studies of Science 43: 321–340.

Ziewitz M (2017) A not quite random walk: Experimenting with the ethnomethods of the algorithm. Big Data & Society 4(2): 1–13. DOI: 10.1177/2053951717738105.

1 See for instance, Amoore 2013; Beer 2009; Dourish 2016; Gillespie 2014; Kitchin 2014; Neyland 2016; Schüll 2012; Seaver 2017; Striphas 2015; Totaro and Ninno 2014; Ziewitz 2017. See also the critical algorithm studies list https://socialmediacollective.org/reading-lists/critical-algorithm-studies.
2 On torque see Bowker & Star (1999).

Sunday, 28 July 2019

The expansion of the health data ecosystem – Rethinking data ethics and governance

Special Theme Issue
Guest Editors: Tamar Sharon* and Federica Lucivero**

* Interdisciplinary Hub for Security, Privacy and Data Governance, Radboud University, NL
** Ethox and Wellcome Centre for Ethics and Humanities, University of Oxford, UK

As in other domains, digital data are taking on an ever more central role in health and medicine today. And as it has in other domains, datafication is contributing to a re-configuration of health and medicine, prompting its expansion to include new spaces, new practices, new techniques and new actors. Indeed, possibilities to quantify areas of life that have not traditionally been considered the remit of biomedicine – such as a person’s consumption patterns, her social media activity or her dietary habits – have contributed to a redefinition of almost any data as health-related data (Lucivero and Prainsack, 2015; Weber et al., 2014). Increasingly, these data are being generated outside the traditional spaces of medicine, as people go about their daily lives interacting with consumer mobile devices. Similarly, the technological tools needed to capture, store, analyze and manage the flow of these data, from wearables and smart phones to cloud platforms and machine learning, increasingly rely on infrastructure and know-how that lie beyond the scope of traditional medical systems and scientists, and instead lie with data scientists and ICT specialists. Moreover, new stakeholders are cropping up in these quasi-medical yet still undomesticated territories. On one end of the spectrum, individuals who generate health data as they track and monitor their health are both solicited as research participants and making demands on researchers to utilize their personal health data (HDE, 2014). On the other end of the spectrum, consumer technology corporations such as Apple and Google are reinventing themselves as obligatory passage points for data-intensive precision medicine (Sharon, 2016). And somewhere in between, not-for-profit organizations, such as Sage Bionetworks and OpenHumans.org, are positioning themselves as mediators in this ecosystem in formation, between the medical research community, individual and collective generators of data, and technology developers.

As proponents maintain, this expansion and decentralization of the health data ecosystem is promising: it may advance data-driven research and healthcare, and it may render research more inclusive (Shen, 2015; Topol, 2015). But, as critical scholars of science and technology have consistently shown, a fuller grasp of our technological present must always include the far-reaching, unexpected and sometimes deleterious social, political and cultural effects of discourses of scientific progress and technologically-enabled democratization and participation. In recent years, such critical scholarship has been particularly wary of the new power asymmetries that datafication contributes to. Rather than levelling power relations, critics observe, these are being redrawn along new digital divides based on data ownership or access, control over digital infrastructures, and new types of computational expertise, where those who generate data, especially citizens, patients, and consumers, are positioned on the losing side of the on-going extraction and scramble for the world’s data driven by state and corporate actors (Andrejevic, 2014; boyd and Crawford, 2012; Taylor, 2017; Zuboff, 2015).

In the context of the data economy, the dominant response to these growing power differentials has been to ensure that individual data subjects acquire more control over the data they produce – what Prainsack calls the “Individual Control” approach in her contribution to this special theme. Examples include the EU’s General Data Protection Regulation or initiatives that allow individuals to monetize their personal data (Lanier, 2013; www.commodify.us). In the context of data-driven medicine, this emphasis on increasing individual control over data has translated into attempts to develop better anonymization techniques and more fine-grained informed consent (Kaye et al., 2015), as well as the configuration of patients as the rightful “owners” of their own medical data (Kish and Topol, 2015).

However, scholars from different disciplines have begun questioning whether enhancing individual control over data is the most effective or desirable means of addressing the new power differentials of digital society. While some scholars emphasize the relational and social nature of persons and data (Taylor, 2012), others question the legal feasibility of individual ownership of data (Evans, 2016), and still others highlight the futility of monetization schemes as a means of redressing inequalities (Casilli, 2019). Most importantly, the emphasis on individual rights and values may result in a reframing of societal concerns as individual ones, while undermining the political power of collectives.

Each of the contributions that make up this special theme addresses the reconfiguration of existing relationships and the emergence of new power differentials that result from the expansion of the health data ecosystem. While they do this from different disciplinary perspectives, they all share the same starting point: the understanding that increased individual control of data subjects is insufficient for anticipating the far-reaching risks and preventing the societal, if not individual, harms associated with this expansion. In light of this, they argue for new governance frameworks, technological infrastructures and narratives that are predicated on the shared responsibility of multiple stakeholders and collective decision-making and control.

The commentaries by Brian Bot, Lara Mangravite and John Wilbanks, and by Bart Jacobs and Jean Popma both discuss the types of technical methods and arrangements that need to be developed to enable secure, responsible and equitable data sharing in the context of decentralized medical research. Both groups of authors are involved in the design and implementation of novel data management infrastructures.

The workings of data management infrastructures are explored further in a filmed interview with José van Dijck about her recent book, co-authored with Thomas Poell and Martijn de Waal, The Platform Society: Public Values in a Connective World (2018). Van Dijck and Sharon discuss the importance of grasping how the material functioning of internet platforms contributes to shaping a new political and social reality.

In their commentary, Alessandro Blasimme, Effy Vayena and Ine Van Hoyweghen scrutinize how the proliferation of citizen generation of medical data, in initiatives like the American “All of Us” program, is unsettling the position of a less commonly studied stakeholder: the private insurance sector. Such initiatives, they argue, create a new “information asymmetry” between private insurers and those of their policy-holders who enroll in such research, which will likely make people more reluctant to donate personal health data for precision medicine research.

Tuukka Lehtiniemi and Minna Ruckenstein focus on data activism as a means of challenging the power asymmetries of datafied societies. Based on their engagement as social scientists with MyData, a data activism initiative originating in Finland, they identify and disentangle two parallel social imaginaries, a “technological” and a “socio-cultural imaginary”. They discuss the benefits and disadvantages of each and call for a greater role for the latter, while acknowledging its weaknesses.

The contributions by Barbara Prainsack and by Linnet Taylor and Nadezhda Purtova both address the limitations of the framework of the commons – today’s preferred site of theoretical and practical resistance for scholars and activists seeking to counter digital power asymmetries by foregrounding collective, rather than individual, control over data. While Prainsack argues that a more systematic discussion of processes of inclusion and exclusion in commons is required, Taylor and Purtova call for more attention to which stakeholders are affected by data practices. Both agree that, in light of the multiple nature of data, the original commons framework cannot be easily transposed from physical to data commons.

In her article, Tamar Sharon calls for a closer examination of the different conceptualizations of the common good that are at work in one specific area of the expanding health data ecosystem, what she calls the “Googlization of health research”, or the recent entrance of large consumer tech corporations into the medical domain. Using the framework of justification analysis (Boltanski and Thévenot, 2006), she identifies a plurality of conceptualizations of the common good that different actors mobilize to justify collaborating within these new multi-stakeholder research projects.

We hope that this special theme offers a productive – albeit far from comprehensive – overview of arguments for and examples of infrastructure, governance and ethics that are collective-centric in addressing the challenges posed by the datafication and expansion of the health ecosystem.


Andrejevic M (2014) The Big Data Divide. International Journal of Communication 8: 1673–89.

Boltanski L, and Thévenot, L (2006) On Justification: Economies of Worth. Princeton: Princeton University Press.

boyd d and Crawford K (2012) Critical Questions for Big Data. Information Communication & Society 15(5): 662–79.

Casilli A (2019) En Attendant les Robots: Enquête sur le Travail du Clic. Paris: Seuil.

Evans B (2016) Barbarians at the Gate: Consumer-Driven Health Data Commons and the Transformation of Citizen Science. American Journal of Law & Medicine (4): 1–34.

Health Data Exploration Project (HDE) (2014) Personal Data for the Public Good: New Opportunities to Enrich Understanding of Individual and Population Health. Calit2, UC Irvine and UC San Diego. Available at: http://hdexplore.calit2.net/wp-content/uploads/2015/08/hdx_final_report_small.pdf

Kaye J, Whitley E, Lund D, Morrison M, Teare H and Melham K (2015) Dynamic consent: a patient interface for twenty-first century research networks. European Journal of Human Genetics 23(2): 141-146.

Kish L and Topol E (2015) Unpatients – why patients should own their medical data. Nature Biotechnology 33(9): 921-924.

Lanier J (2013) Who Owns the Future? London: Penguin Books.

Lucivero F and Prainsack B (2015) The lifestylisation of healthcare? “Consumer genomics” and mobile health as technologies for healthy lifestyle. Applied and Translational Genomics 4. DOI: 10.1016/j.atg.2015.02.001.

Sharon T (2016) The Googlization of health research: from disruptive innovation to disruptive ethics. Personalized Medicine. DOI: 10.2217/pme-2016-0057.

Shen H (2015) Smartphones set to boost large-scale health studies. Nature. DOI:10.1038/nature.2015.17083.

Taylor L (2017) What is data justice? Big Data & Society 4(2): 1-14.

Taylor M (2012) Genetic Data and the Law: A Critical Perspective on Privacy Protection. Cambridge: Cambridge University Press.

Topol E (2015) The Patient Will See You Now: The Future of Medicine Is in Your Hands. New York: Basic Books.

van Dijck J, Poell T and de Waal M (2018) The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Weber GM, Mandl KD and Kohane IS (2014) Finding the missing link for big biomedical data. JAMA 311(24):2479–2480. doi:10.1001/jama.2014.4228

Zuboff S (2015) Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology 30(1): 75–89.