My God! What have I done?…

And you may find yourself / Living in a shotgun shack / And you may find yourself / In another part of the world / And you may find yourself / Behind the wheel of a large automobile / And you may find yourself in a beautiful house / With a beautiful wife / And you may ask yourself, well / How did I get here? / … / And you may ask yourself / How do I work this? / And you may ask yourself / Where is that large automobile? / And you may tell yourself / This is not my beautiful house! / And you may tell yourself / This is not my beautiful wife! / … / What is that beautiful house? / And you may ask yourself / Where does that highway go to? / And you may ask yourself / Am I right? Am I wrong? / And you may say to yourself, “My God! What have I done?” / …

Brian Eno and David Byrne, with Talking Heads, on the (incredible!) song Once in a Lifetime (1980). They have been telling us ever since, and we have been snubbing them. We were dancing! They warned us, but we didn’t twig; we were sipping our drinks, carefree! When Byrne, in his stylized preacher’s delivery, described the unsettling of personal perception of reality (or is it perhaps the other way around: an unsettling of reality itself that leads to complete disorientation?), he was speaking in the punk spirit of the time, about a furnished petty-bourgeois life that lacked meaning. But the years passed, GANs arrived, and the song re-entered the spirit of the time, now as an ominous description of the present dystopia.

GAN stands for generative adversarial networks, a method proposed in 2014 by a team of researchers at the University of Montreal that now sits at the forefront of artificial intelligence technologies. A GAN consists of two self-learning networks: one looks for patterns in a given data set and produces copies, while the other evaluates the first one’s proposals and, whenever it finds differences between original and copy, sends the “job” back for reworking, until the two can no longer be told apart. In the tests to date, its output already spans a large part of what we used to call “artistic creation”: music, speech, painting, photography… Imagine two robot artists, one tossing out “ideas” by borrowing data from here and there, the other examining and judging them; the combined result of the two robots’ actions is the “artwork”. For example, in October 2018 three French students made a portrait of an 18th-century nobleman via GAN, and the picture sold at a Christie’s auction for $432,500, with Christie’s celebrating the introduction of artificial intelligence into art.
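For readers who want the back-and-forth between the two networks spelled out, here is a minimal sketch of the adversarial game, under assumptions of our own: it uses PyTorch (not the original 2014 code) and a toy one-dimensional “data set” (samples from a Gaussian) instead of images, just to show the generator and its judge pushing against each other.

```python
# Minimal GAN sketch (an illustration, not the 2014 paper's setup):
# the generator fabricates samples from noise, the discriminator tries to
# tell them apart from the "real" data, and each one's training step
# forces the other to improve.
import torch
import torch.nn as nn

def real_batch(n=64):
    # Toy "real" data: a 1-D Gaussian the generator must learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator: real samples are labelled 1, fakes 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator: it is rewarded when its fakes are judged "real",
    #    i.e. when the "job" no longer gets sent back.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward the real mean (~4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```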

Where GANs have yielded the most is in photography, thanks to the work of NVIDIA, which, without changing the architecture of the competing networks, backed it with enormous computing power. The result is that photographs of individuals can now be produced, combining features of real persons, that cannot be identified as fabricated. The first photo shows the results of tests performed in 2014: on the right the originals, on the left the series made by the GAN, gray, blurred and not at all realistic. The second photo (below), however, shows the results of four years of testing and growing computing power: convincing photos of “normal” people, cut and stitched together by artificial intelligence. The fabricated ones are those produced by combining each face of the top row (source) with each face of the left column (destination).
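To picture that “source / destination” combination, here is a rough, hypothetical sketch of our own: it assumes (as in NVIDIA-style face generators) that every face corresponds to a latent code, and that a new, non-existent face can be fabricated by mixing two such codes. The linear blend below is a simplification; NVIDIA’s actual mixing is more structured, and `generator` stands for a trained network that is not defined here.

```python
import torch

latent_dim = 512                       # a typical latent size in face generators (assumption)
source = torch.randn(latent_dim)       # code standing behind the "source" face
destination = torch.randn(latent_dim)  # code standing behind the "destination" face

# Blend the two codes; the result encodes a face that never existed,
# carrying features of both originals.
alpha = 0.5
mixed = alpha * source + (1 - alpha) * destination

# fabricated = generator(mixed)        # hypothetical call to a trained generator
```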

You can guess what situations such technology will land us in: fake news and alternative facts will come to look like misdemeanors, as well-crafted, artful alternative realities, meticulous down to the last detail, unfold.

And you may ask yourself / How did I get here? / And you may ask yourself…

bytes & genes | cyborg #14 – 02/2019