data: the new “raw material”

you can’t get lost

In the house where I grew up, the forest reached right up to our back door. On many summer mornings I would run out and get lost in the forest with my friends. We would play for hours, roaming around a few square kilometers of woodland until we got hungry. Then we would go home to devour mac and cheese and head back to the forest until it was time for dinner. Our parents had a general idea of where we were, but they didn’t much care to know exactly where. No one was tracking us. No one could locate us. No one could find us. There were no adults anywhere – only kids and animals. Our parents knew that once we got hungry, we would come home.
This was the typical situation for kids of my generation. Suburban kids rode their bikes; city kids gathered at playgrounds and underpasses. Today, every child has a mobile phone, including my thirteen-year-old son. When today’s kids leave the house, they remain in constant contact with parents and friends via phone and text messages. They emit GPS signals. They leave digital footprints on social media. They are little beacons producing and consuming data. If any of my three children went off the grid, like we used to, my wife and I would be close to going crazy with worry that something bad had happened.
We have adapted to a reality where we can find everyone at any time… and we expect and demand to be connected at all times.

The first time a child picks up a mobile phone or plays their first video game, they begin to build a stack of personal data that will grow throughout their lifetime – a stack that can constantly be cross-referenced, correlated, encoded, and sold. When I was in college, some twenty years ago, I neither sent nor received a single email or written message. I posted nothing on social media. I didn’t own a mobile phone. Yet I am now thoroughly profiled and commodified, like most Americans. Private companies collect and sell up to 75,000 individual data points on the average American consumer. And even this number is microscopic compared to what follows.

The explosion of data production is a very recent phenomenon, and since it began, the ability to store data has been increasing at an exponential rate. For millennia, keeping records meant clay tablets, or scrolls of papyrus or parchment crafted from animal skins. The first modern paper, made from wood or grass pulp, was a significant advance; the first real milestone in mass data production, however, was the invention of the printing press. In the first 50 years after the printing press appeared, 8 million books were printed – more than all the books that European writers had produced in the previous millennium.

Thanks to successive inventions such as the telegraph, telephone, radio, television, and computers, the global volume of data increased dramatically during the 20th century. By 1996, there was so much data and computing had become so inexpensive that digital storage had, for the first time in history, become a more economical solution than paper-based systems.
Even by 2000, only 25% of data was stored in digital form. Within a decade, by 2007, this percentage had skyrocketed to 94%. And it has continued to rise ever since.
Digitalization has significantly enhanced data-collection capabilities. Ninety percent of global digital data has been generated over the past two years. Every year, the volume of digital data increases by 50%. Every passing minute, 204 million emails are sent, 2.4 million pieces of content are posted on Facebook, 72 hours of video are uploaded to YouTube, and 216,000 new photos are posted on Instagram. Industrial companies are incorporating sensors into their products to better manage supply chains and the handling and movement of goods. All of this culminated in the creation of 5.6 zettabytes of data in 2015. A zettabyte equals 1 sextillion (10^21) bytes, or 1 trillion gigabytes.

Big data is a general term describing how these large amounts of data can now be used for understanding, analyzing, and forecasting trends in real time. It is used interchangeably with the terms big data analysis, analytics, or deep analytics.
A common misconception is that the progress achieved thanks to big data is simply a function of the quantity of data collected. In reality, this increase in the amount of data itself is useless if there is no capability to process it.

Thus begins one of the chapters of Alec Ross’s book, “The Industries of the Future.” Ross is one of the contemporary “venturers” in what we might call techno-scientific philology / futurology.[1] References to the past (here, to the author’s childhood), to an “age of innocence,” may intentionally or unintentionally evoke a wave of nostalgia: it is the best way for the reader to be reconciled with the perhaps harsh but inevitable datafication of everything.

At the same time, however, the above excerpt hints at the paradigm shift[2] that is already happening and can easily be “mapped” onto everyday life: you cannot get “lost” in the digitally/electronically interconnected social field, since you are now a small, insignificant, permanent data generator. You don’t even want to “hide”! And if it happens (through some malfunction or accident) that these thin, invisible electromagnetic fibers which “connect” you are interrupted, then you truly “get lost,” with a new variation of the pain-of-loss: you “get lost” even inside your own home…
For the sake of analysis, we can summarize somewhat schematically but not arbitrarily.

There is “world A”, where the following happens:
a) You don’t “get lost”, either in “physical space/time” or in the flow of your thoughts, because you have learned (and this happened gradually) to recognize “landmarks”. Depending on whether it’s a suburban forest, a completely unknown mountain, the city center, a suburb, an open sea, a coastal sea route, or your own reasoning, these landmarks can take various forms. The position of stars and constellations at night; the position of the sun and the north/south/east/west system; trees, buildings, rocks, streams, monuments, elevated positions (or objects), signs; even the urban street numbering of buildings… But also “reference points” in a train of thought, fixed ideas that correspond to strong beliefs, feelings and their stimuli…
b) If you do “get lost” you can (depending on your level of composure…) try to “find yourself again” by using the above or similar methods.
c) If you want to “get lost” – if, in other words, you want to remain unfound (by whoever might be looking…) – you can do so at will. You can hide in the literal or the metaphorical sense: find a physical hiding place, or keep moving constantly, or keep your thoughts to yourself; keep them for yourself, coherent, repressed, but personal…

There is also “world B”:
a) You don’t “get lost” because you are what Ross describes (his analogy is apt): a permanent beacon. You emit signals everywhere, whether deliberately or not. You don’t need any knowledge or awareness of “reference points”: these are embedded in the handheld device, and it is the device’s own functioning that supplies you with coordinates, directions, routes, and paths. Mentally, you don’t “get lost” to the extent that you constantly externalize (that is, digitize) not just reasoning but any momentary intellectual or emotional “flash.” In this new model, as long as the machines (and your beacon in relation to them) work correctly, you “can’t get lost.” (To be precise: you are definitively lost by the “old criteria” of “world A” – but more on this below.)
c) Most likely, you don’t even want to “get lost.” Primarily because “getting lost” means immediate loss of identity: you have formed a “beacon identity,” and a human beacon that stops transmitting is a human beacon destroyed. Dead…

These two “worlds” are completely different from each other. One cannot claim that “world B” is simply the evolution of “world A”, although it is possible to find genealogical relationships between the “new” and the “old”. However, “world B” considers it unthinkable that “world A” ever existed or could exist, except as a primitive beginning. “World A” considers “world B” a tremendous regression. These are worlds that are incompatible with each other (and will become even more so as “world B” develops).

It is therefore easily demonstrated, through simple observation (or not so simple…), that a literal paradigm shift is evolving in the social field as the bioinformatic, digital model “develops” – a “paradigm change” across almost the entire spectrum of social relations:
– Social concepts, meanings, and representations of the general notions of “space” and “time” are changing, have already changed. They change through the radical transformation of the concept, meaning, representation, and memory of what “space” and “time” mean for each individual – whether conceived as “physical” states or as social arrangements. For example, while someone is “somewhere” in the sense of “world A,” being interconnected, it is very likely that they are “anywhere else” outside this “somewhere”; or that this “somewhere” exists only as its digital representation (through some selfie, for instance, that will circulate anywhere).
This is tangible in everyday life. Verbs like “upload,” “download,” “load,” which once had spatial and bodily meanings, have radically changed in significance. Space is once again transformed into time – a constant capitalist process…
– What we might call social orientation skills, whether in the physical or the intellectual sense, are changing, have changed. From how one gradually begins to learn about oneself within “world A,” to the claims (claims of control, of literal or symbolic ownership, of “contact,” of “recognition”) that each person makes within and upon “world B.” Would it be excessive to observe that, as generalized informational, digital, data mediation cancels and subordinates all previous (in the historical sense) ways – and with them the obstacles, difficulties, setbacks, doubts, even the body with its limits (which necessarily shaped the means and ways of “world A” as we described it earlier) – these claims become far more aggressive, precisely because they do not know what “limits” mean?[3] Would it be excessive to argue that these claims become terrifying yet also easily frightened (from a subjective perspective) because they do not know, and have no measure of, what restraint, inability, rejection, or disappointment mean? Would it be excessive, finally, to argue that the easily accessible and approachable “giga-world B” silently and subterraneanly constructs individual “micro-worlds B,” as a kind of (subjectively) unknown wandering – “micro-worlds B” that constantly try to find their central place in the “giga-world B,” systematically failing, with all the emotional and psychological implications this entails?

Someone might argue that the above claims stem simply from our belonging (generally, though not absolutely…) to “world A.” That is, we speak from a historical position from which we cannot help but see only the losses of “world B” – and not its new possibilities. This is true up to a point; such is our position!
But something even more dangerous is happening. Even if the above claims are proven correct, they are partial, because they are, to a greater or lesser extent, ontological. Even if we do not state it explicitly, they have at their center an abstract “human being,” as shaped in “world A” – as if this “being” has always inhabited the planet (and potentially others in the future), rather than historically specific human subjects who transform both the “world” and themselves in specific ways, usually proceeding blindly.
In other words: no matter how correct, interesting, or simply debatable the conclusions drawn from an ontological approach to the undeniable paradigm shift may be, they lack something very basic: capitalism and power structures! Without analyzing capitalist and power-related transformations, any ontological approach, no matter how useful it may be in its observations and conclusions, eventually leads to the pseudo-philosophical and misleading question “what is man?”. And there is no answer to that; or there are so many answers that they end up in self-referential obsessions…

you can’t hide (from the chapter…)

The dipole that constitutes the quintessentially capitalist process is the one contained in the final sentence of the excerpt from Ross above: …In reality, this increase in the amount of data itself is useless if there is no capability to process it. Data, lots of data, infinite data, and its algorithmic processing: these did not fall like apples-from-the-tree. Nor were they the declared “holy grail” of the philosophical or scientific intellectual endeavor of our kind, in any of its variations, before the 20th century. And, of course, the 20th century cannot be conceived in any way other than as capitalist. Here is another excerpt from Ross’s book, another sample of the paradigm shift, where ontology goes for a walk:

The greatest hope for feeding an ever-growing global population comes from the combination of big data and agriculture – precision agriculture. For thousands of years, farmers operated on a combination of experience and instinct. For most of human history, the phases of the moon were considered the most important scientific input in agriculture (owing to ancient beliefs about the moon’s influence on soil and seed, and to the more practical problem of timekeeping without clocks or calendars). World War II was followed by a period of scientific and technological innovation that sparked the so-called Green Revolution, which brought a tremendous increase in agricultural production and reduced both hunger and poverty. The Green Revolution introduced new technologies and practices in hybrid seeds, irrigation, agricultural chemicals, and fertilizers. Even today, however, farmers usually work on a fixed schedule for planting, fertilizing, pruning, and harvesting, without paying much attention to changes in weather and climate conditions or to the small variations occurring within each field; agriculture remains an extension of the industrial era.

What precision agriculture promises is that it will collect and evaluate a plethora of real-time data on factors such as weather, water and nitrogen levels, air quality, and diseases – factors tracked not merely for each farm or each acre, but for each square centimeter of agricultural land. Sensors deployed in the field will feed the cloud with dozens of forms of data. This data will be combined with data from GPS and from meteorological models. After collecting and evaluating this information, algorithms will be able to formulate a precise set of instructions for what the farmer should do, when, and where.

The tractor or combine harvester I used to climb on as a child was a simple, sturdy machine: a steel frame, large tires, an engine, and nothing more. The farmer worked the field by the day and the hour, sizing up by eye the stretch of farmland ahead. The agricultural machines being built for tomorrow look more like aircraft cockpits than like the tractors I remember from my childhood. Graphical interfaces for software programs run on a tablet-type computer within the farmer’s line of sight. The machine does not move where the farmer directs it, but according to instructions given by the software, which controls it remotely.
As the machine works in the field, active sensors mounted next to the headlights feed the system information about crop yield. As it traverses the field autonomously, the machine continuously absorbs and uses information – from satellites high in the sky and from the ground beneath it. Instinct has given way to algorithms. The machine operates with a degree of precision that farmers at no other moment in human history could have dreamed of.

Today’s versions are mere glimpses of what is possible. Eventually, this tractor will be able to sense what each square centimeter of soil needs and dispense microscopic amounts of specially formulated fertilizer mixtures, depending on what that one square centimeter requires. Instead of covering a field with a fixed amount of phosphorus or nitrogen, it will apply exactly the amount needed at each precise spot.
The first investments to establish precision agriculture on a global scale are already being made by the largest companies in the agricultural sector, including Monsanto, DuPont, and John Deere. Monsanto became convinced early on of the importance of analyzing large data sets and embarked on a buying spree, paying billions of dollars to acquire companies specializing in agricultural data analysis. The company estimates that data analysis can increase agricultural production by 30%, with an economic impact of around 20 billion dollars.

Thanks to innovations… farmers will increasingly resemble office workers in appearance and working methods. They will spend more of their day on tasks such as entering data and updating software, and less with their hands in the soil.
In 2014, Monsanto’s Chief Technology Officer, Robb Fraley, stated: “We see ourselves comfortably transforming into a software company within the next five to ten years.”

Let no one rush to dismiss the above as fantastic prophecies! They are already happening. There is no serious greenhouse flower-growing operation (we assume the same holds for other plant products) here in technologically backward Greece that does not have various types of sensor systems installed inside it: thermometers (with corresponding systems for lowering the temperature through artificial waterfalls or raising it with professional air conditioning); hygrometers measuring humidity in the air and in the soil; “smart” automatic irrigation-and-fertilization drip tapes (fertilizers and medicines are dissolved in the water), with an outlet per root and remotely controlled flow, so that the drops are exactly what each root needs; water quality and quantity sensors in the tanks; “smart,” remotely controlled ventilation openings – all of these monitored, represented, and directed by a “central control system,” that is, a computer, with appropriate software, a touch screen, and so on. Although the old specialties that do the “dirty work” (such as harvesting fruit or spotting diseases or parasites on individual plants) have not been replaced by robots (and perhaps never will be, as long as miserably paid migrant workers continue to do these jobs…), new specialties that would have been unthinkable just a few years ago have been added to the organization of this type of agricultural production: electricians, plumbers, programmers, sellers and/or repairers of robotic systems… Greenhouses, being relatively controlled environments, were easier to bring into the digital age than open fields, where the environment is far more complex. But that is no reason for capitalist enterprises to be discouraged! Quite the opposite: it is what is called a challenge…
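The logic of such a “central control system” is, at bottom, banal: read sensors, compare against norms, drive actuators. The sketch below is a deliberate oversimplification of that loop; every name and threshold in it is hypothetical, invented for illustration, and a real greenhouse controller would add hysteresis, scheduling, and safety interlocks:

```python
# Minimal sketch of a greenhouse control cycle: sensors in,
# readings compared to setpoints, actuator commands out.
# All sensor names, thresholds, and commands are hypothetical.

from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float   # air thermometer
    air_humidity: float    # hygrometer, 0..1
    soil_moisture: float   # soil probe, 0..1

def control_step(r: Reading) -> dict:
    """Return the actuator commands for one control cycle."""
    return {
        # artificial waterfalls / ventilation flaps when too hot
        "cooling": r.temperature_c > 28.0,
        # professional air conditioning when too cold
        "heating": r.temperature_c < 16.0,
        # drip irrigation only when the soil is dry enough
        "irrigation": r.soil_moisture < 0.35,
        # extra ventilation when the air is too humid (disease risk)
        "ventilation": r.air_humidity > 0.85,
    }

if __name__ == "__main__":
    hot_dry = Reading(temperature_c=31.0, air_humidity=0.40, soil_moisture=0.20)
    print(control_step(hot_dry))
```

The point is only the pattern: data flows in, norms decide, machines act – the same pattern Ross describes for the open field, and, as argued below, the same pattern now being extended to the social field itself.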

However, our topic is not agricultural production and precision farming. It is this: in agricultural production, where no issues of social ethics, ideology, or, above all, the ontology of “what is man” arise, companies easily come forward as what they are – processes of capitalist accumulation and profitability. Does it occur to you, however, that just as every tomato root or every orange tree is (or would be) controlled remotely through all kinds of sensors – the physical substrate of their specific datafication – and, through the flow of data and its appropriate algorithmic processing, maximum cultivation yield is achieved, so exactly does the voluntary self-datafication[4] of modern citizens, through the “individual machines” and the “new sociality,” push our species toward the conditions of its transformation into a cultivated crop? Does it occur to anyone that self-datafication appears to carry “personal benefits” only because what is (and what is not) a social norm is being restructured; and that what it actually does is deliver control (and exploitation) to a historically unprecedented depth and extent, significantly reducing the distance between “human subjectivity” and the organized cultivation of plants and/or animals?

the controlled cultivation of the social

Thanks to the generalized and voluntary datafication of social relations, customs, behaviors, and practices, for the first time not only in capitalist but in human history, everything that has been considered a private matter is handed over to recording and accumulation. For the first time in human history, what we call a relationship acquires a specific kind of materiality – “invisible” to the human senses, but very tangible to the appropriate machines, storable and, above all, processable. Beyond every other form of raw material, capitalism has managed to create, spread, and establish a social raw material of uniform type, uniform encoding, uniform management: bytes.

At least three possibilities have already emerged that this mechanization/datafication of the social opens up for capitalism:
– The continuous construction of consent, obedience, and disorientation through mechanical means which nevertheless appear “human,” so as to be more convincing. Social media are, so far, the basic field in which various bosses exercise this capability. We assume it will develop even further in the future.
– The lifelong filing and self-filing of citizens, each one separately.
– The bio-political management of populations en masse: on the model of the herd and/or the greenhouse.

Even an outright defender of the information-technology / robotics “industries of the future” like Alec Ross cannot help but observe that what we call the full and real subsumption of society under capital will have certain “undesirable side effects” for citizens – suggesting, too, where one should “look”:

… My friend and former colleague at the State Department, Jared Cohen, is today the director of Google Ideas, a research organization founded by Google in 2010. He recently became a parent for the first time and is particularly concerned about protecting children’s personal data in an age of data permanence. “This is what worries parents most,” he says. “Whether you live in Saudi Arabia or the United States, children go online at a younger age and earlier than at any other time in recorded history. They say and do things online that far exceed their physical maturity. If a nine-year-old starts saying stupid things online, those stupid things will be preserved for his entire life thanks to the permanence of data.”


The risks that lurk are not always visible, which makes the effort to address data permanence even more complicated. Take, for example, the mobile app Good2Go, which is marketed as a “consent app.” On the app’s main website we see a young man and a young woman standing in the shadows, looking at the phone the young man is holding. The text reads: “When a girl meets a boy and love strikes like lightning, the question must be answered: Are we Good2Go [consenting to proceed]? The educational Consent App for sexual consent.”
The idea on which the app is based – to encourage men and women to obtain “positive consent” before sex – is commendable. Here is the problem, however: the app records users’ names and phone numbers, as well as their sobriety level and the exact time of “consent.” This creates a permanent record regarding who you have sex with, when, and whether you were sober, in the mood, or drunk.
Does Good2Go have the legal right to sell this information to advertisers? Yes. The app’s privacy policy does not appear on its website, but if you find it you will read that “the company may not be able to control how your personal information is handled, transferred or used,” even if the app itself disappears.

It’s not just old emails or your love life that can resurface and cause you problems. It could also be the math class you failed, the fight you got into at school, or your inability to make friends as a child.

Whether or not we want to uphold a stricter version of privacy protection, it is most likely no longer possible to turn back and truly recover the concept. Margo Seltzer, a computer science professor at Harvard University, argued at the 2015 World Economic Forum in Davos that “Privacy protection as we knew it in the past is no longer feasible. […] Our commonly accepted notion of privacy protection has died.”
Due to the proliferation of sensors, devices, and networks absorbing data from everywhere, we have probably passed the point beyond which any meaningful halt to data collection is possible. Instead, perhaps we should focus our attention on data retention and proper use – that is, on clearly defining how long data can be kept, and on regulating how they can be used, whether they can be sold, and what kind of consent is required from the person providing them.

Ross presents as the only remaining option an appeal to the “responsibility” and “integrity” of the capitalist world, through certain institutions, so that the bosses will not abuse our “personal data.” However, such a thing, even if politically desirable, is by now unfeasible. This is proven by the surveillance (and big-data collection) techniques of state security services: they operate even in (formal) illegality, since the recording, accumulation, and processing of data can be done anywhere on the planet, by companies and police forces, far from any risk of being located.
Besides, in the course of the mechanization/datafication of the social, businesses and states have already run up against the protective ramparts that existing (and, in some countries, large) “personal data” organizations attempted to erect… They overwhelmed them thanks either to the massive voluntary surrender of this data, or to the massive, inexcusable ignorance of what happens with every use of a networked computing machine, or to technical advances in the capabilities of algorithmically processing this raw material.

In other times, when mass recordings of “personal data” were made on paper and their archives were rooms full of cabinets and shelves of folders, registry offices and, above all, tax offices became targets of arson during various uprisings and even revolutions. To a greater or lesser extent, citizens consciously understood that their “personal data” were being used against them, even when the state smiled paternally and “responsibly.”
With the generalized transformation of our every personal/social moment into an electronic “datum” – transmitted anywhere, some of it ending up on the hard drives of companies and state services to become the raw material of every processing and reprocessing – it borders on the impossible for these mechanisms to be endangered by targeted, liberating destruction at some point in the unknown future.
The conscious distancing from self-datafication – keeping the greatest possible distance from any not strictly necessary use of the new information machines – and the radical revision of social customs and habits already based on “networking” are the only things that seem tangibly feasible at this historical moment. Before something like this becomes a “crime”!…

For this to happen, however, awareness of what is taking place in the 3rd and 4th industrial revolutions is required. It is not difficult work; it is, however, urgent!

Ziggy Stardust
cyborg #11 – 02/2018

  1. The book was published in English in 2016, and in Greek (by Ikaros, translated by Nikos Roussos) in June 2017. On the inside flap of the cover, the following is noted:
    Alec Ross (Charleston, West Virginia, 1971) is considered one of the world’s leading experts on innovation. He served as Senior Innovation Advisor to the U.S. Department of State and collaborated with Hillary Clinton during her tenure.
    In 2013, he left the Department of State and joined the School of International and Public Affairs at Columbia University as a senior fellow.
    The book The Industries of the Future topped the New York Times Best Sellers list, was translated into 18 languages, and won the award for best book of the year from the Tribeca Disruptive Innovation Awards in 2016.
    ↩︎
  2. We keep the term untranslated so that no misunderstandings arise – of the kind discussed in the previous text – about the paradigm changing… again. ↩︎
  3. The border became an abstract concept only in mathematics. Otherwise it was an intensely spatial (and in some cases spatio-temporal) concept. Earlier civilizations even created deities of borders – Artemis, for example. But cyborg is not the right occasion for this interesting topic. ↩︎
  4. More at fitter, happier, more productive…  (about self-quantification), by Shelley Dee, in cyborg 2 (February 2015). ↩︎