In programming, a variable is a storage location that contains some known or unknown quantity of data, referred to as a value. An identifier in the source code can be bound to a value over time, and the value of a variable may change during the execution of the program. Consequently, “A new media object is not something fixed once and for all but can exist in different, potentially infinite versions.” (Manovich, 2002)

On the web, the user became a variable in their own right: by implementing actions as variables, websites can store data about users’ behaviour and analyse it in order to improve a given functionality. This information about the user can then be used by a program to automatically customise the composition of the interface as well as to create the content itself, enabling the user to play an active role in determining the order in which already generated elements are accessed. Programmable media thus position users as an active audience, able to manipulate and customise their experience. This is what variability means: that the elements and the structure are independent of each other. According to Manovich, “new media technology acts as the most perfect realization of the utopia of an ideal society composed from unique individuals. New media objects assure users that their choices — and therefore, their underlying thoughts and desires — are unique, rather than pre-programmed and shared with others.” (Manovich, 2002)

Nonetheless, the way tech companies use this inherent feature of the web doesn’t really feel like a customisation tool for users to act on their web environment. Instead, it appears more as a tool for tech companies to optimise their profit without relying on any user’s choice or stated preference. It becomes all about revealed preferences the user is unaware of disclosing. Indeed, if you’ve spent any time using the web today, you’ve most likely already been the subject of what is called an A/B test. Websites run tests of various kinds to experiment with different layouts, or even to show you different kinds of content depending on who you are. The process is simple: an A/B test is a randomised experiment with two variants of a webpage, A and B. Users of a website are split into two groups. Group A is shown the control version, the website as it is, and group B is shown the treatment version, a variation of the former. After some time has passed, the users’ behaviour is analysed to determine which version has edged out the other with a statistically significant improvement, such as more clicks, more time spent on the website or a higher conversion rate. If you’ve ever compulsively reloaded the New York Times, you might have noticed that the headline of the same article changes; or you might visit a website and not see the same layout as someone else.
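To make the mechanics concrete, here is a minimal sketch in TypeScript of how such a split and its evaluation might work. The deterministic bucketing by user identifier and the two-proportion z-test are generic illustrations of the technique, not the code of any particular website, and all the numbers in the example are invented.

```typescript
// Minimal A/B test sketch: deterministic bucketing plus a two-proportion z-test.
// The hashing scheme, thresholds and figures are illustrative assumptions only.

type Variant = "A" | "B"; // A = control, B = treatment

// Deterministically assign a user to a variant so they see the same version on every visit.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hash % 2 === 0 ? "A" : "B";
}

// Compare conversion rates (e.g. clicks / visitors) with a two-proportion z-test.
function zTest(conversionsA: number, visitorsA: number, conversionsB: number, visitorsB: number): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / standardError; // |z| > 1.96 is roughly "significant at the 5% level"
}

// Invented example: 20,000 visitors per group, B converts slightly better.
console.log(assignVariant("user-42"));        // "A" or "B", stable for that user
console.log(zTest(1000, 20000, 1100, 20000)); // z-score of about 2.2, so B "wins"
```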

Code extract of an experiment from nytimes.com that is A/B testing two variations of a headline. Image from Reisman, D., 2016. A Peek at A/B Testing in the Wild. Freedom to Tinker [online]. freedom-to-tinker.com/2016/05/26/a-peek-at-ab-testing-in-the-wild
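The extract itself is reproduced as an image in Reisman’s post; as a rough, hypothetical illustration of what such client-side variation code tends to look like, the following TypeScript sketch swaps a headline element between two variants. The selector, the headline texts and the variant names are invented and do not come from the nytimes.com code.

```typescript
// Hypothetical headline experiment, loosely modelled on the kind of variation code
// Reisman describes. The selector, headline texts and variant names are invented.

const headlineVariants = {
  control: "Senate Passes Budget Bill After Long Debate",
  variation: "How the Senate's Budget Deal Came Together",
};

function applyHeadlineExperiment(variant: keyof typeof headlineVariants): void {
  const headline = document.querySelector<HTMLElement>("h1.story-heading"); // assumed selector
  if (headline) {
    headline.textContent = headlineVariants[variant];
  }
}

// A testing platform would normally decide which variant a user falls into;
// here we simply pick one at random.
applyHeadlineExperiment(Math.random() < 0.5 ? "control" : "variation");
```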

Such optimisation testing spread quickly across the web, and tech companies gained access to tools that let them run their businesses like ongoing science experiments. Indeed, the platforms and tools to run these experiments are widely available and accessible to non-technical website operators, allowing publishers and designers to easily carry out A/B testing. From front-end user-interface changes to back-end algorithms, from the smallest startups to the biggest political campaigns, from search engines to retailers, from social networking services to online newspapers, online controlled experiments are now used to make data-driven decisions. BuiltWith, an online company that analyses websites to see which web technologies are in use, reported 57,363 detections

BuiltWith, A/B Testing Usage Distribution in the Top 1 Million Sites [online]. Available at: https://trends.builtwith.com/analytics/a-b-testing [Accessed 24 March 2019]

of A/B testing in the Alexa Top 1 Million sites.

Alexa Traffic Rank is a key metric published by Alexa Internet analytics, designed to be an estimate of a website's popularity. The rank is calculated from a combination of daily visitors and page views on a website over a three-month period. The Alexa Traffic Rank can be used to monitor the popularity trend of a website and to compare the popularity of different websites.



The academic paper Who’s the Guinea Pig? Investigating Online A/B/n Tests in-the-Wild

Northeastern University, 2019. Who’s the Guinea Pig? Investigating Online A/B/n Tests in-the-Wild. Available at: https://www.shanjiang.me/publications/fat19_paper.pdf

from Northeastern University analysed 575 websites running A/B tests between January and March 2019. The paper describes an experimental methodology to reveal when a website is using A/B testing and to determine which factors appear to go into the decision to show particular content, through an analysis of a specific platform called Optimizely.

Optimizely is an American company that makes customer experience optimization software for other companies. The Optimizely platform technology provides A/B testing tools. https://www.optimizely.com/
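The paper’s measurement pipeline is considerably more involved, but its starting point, detecting whether a page loads the Optimizely client snippet at all, can be sketched in a few lines. The snippet is typically served from cdn.optimizely.com; the naive substring check and the use of the global fetch API are simplifying assumptions on my part, not the authors’ code.

```typescript
// Simplified sketch of the first step of such a measurement study: fetch a page's HTML
// and check whether it references the Optimizely snippet. The substring match is a
// deliberately naive assumption; the actual study relies on a full crawling pipeline.

async function usesOptimizely(url: string): Promise<boolean> {
  const response = await fetch(url);
  const html = await response.text();
  // The Optimizely client is usually loaded from cdn.optimizely.com.
  return html.includes("cdn.optimizely.com");
}

usesOptimizely("https://example.com").then((found) => {
  console.log(found ? "Optimizely snippet detected" : "no Optimizely snippet found");
});
```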

The authors analyse possible ethical pitfalls of this process through different case studies, insisting that the most alarming fact is that these experiments are not random but can target a specific audience for specific tests. Indeed, many websites target users based on IP address and geolocation. This already introduces bias, as 13.3% of the websites target only US users. Another study

Reisman, D., 2016. A Peek at A/B Testing in the Wild. Freedom to Tinker [online]. Available at: https://freedom-to-tinker.com/2016/05/26/a-peek-at-ab-testing-in-the-wild

led by a law student at Princeton depicts how even 'non-profit' organisations use these tools. Reisman gives the example of charity: water and the Human Rights Campaign, which both have experiments defined to change the default donation amount a user might see in a pre-filled text box, according to the data they can collect from users visiting their websites. Online you can find long lists of similar studies, and it’s interesting to note that most of them were motivated by the Facebook emotional contagion scandal of 2014.

Facebook conducted a massive psychological experiment on 689,003 users, manipulating their news feeds to assess the effects on their emotions. The details of the experiment were published in an article entitled "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks". Available at: https://www.pnas.org/content/111/24/8788.full

Even if it’s hard to find evidence of sinister motives behind the use of A/B testing, the fact remains that we are being experimented on constantly, and it’s not too much of a step to realise that the quest for personalisation might mean that we eventually have no shared experience of the internet.

A/B tests are used for content personalisation and are consequently reshaping the web-publishing discipline. What could be a data-informed process has become a data-driven process. The nature of success on the web is being redefined by tools to optimise websites, where statistics become the only argument of authority. It also creates a fundamental shift in the role of designers, as these tests are heavily used in design decision-making. The web designer now has to take on the role of experiment leader, testing and trying out design elements. Testing comes to represent unbiased truth, the foundation and justification of every detail of the interface. Extremely large data sets are analysed to reveal patterns, trends and associations, especially relating to human behaviour and interactions. But are they a reflection of reality, or is reality twisting under their effect? The desire for quantification poses a set of problems in the design process. It’s very difficult to measure the purpose of an interface beyond an action or a specified value. It can quickly turn into a tyranny of taste that overrules the authority of the designer. The famous 41 shades of blue test run by Google is a testament to this. In 2009, Douglas Bowman, then working as a designer at Google, quit, writing in his farewell post that: “A team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better.”

Bowman, D., 2009. Goodbye, Google [online]. Available at: https://stopdesign.com/archive/2009/03/20/goodbye-google.html

While Google argued that the designer’s story was not quite accurate, the example, even taken as a thought experiment, reflects the position the designer is expected to occupy.
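As a back-of-the-envelope illustration of what “testing 41 shades” amounts to in practice, the sketch below generates candidate shades between two blues and simply picks the one with the highest observed click-through rate. The colour interpolation, the sample sizes and the click counts are all invented; the point is only how thin the decision criterion is.

```typescript
// Back-of-the-envelope sketch of a "41 shades of blue" style test: generate candidate
// shades between two blues and pick the one whose links were clicked most often.
// The colour interpolation and the click counts are invented for illustration.

interface ShadeResult {
  color: string;        // CSS hex colour shown to one group of users
  impressions: number;  // how many users saw links in this shade
  clicks: number;       // how many of them clicked
}

// Interpolate shades between two blues (only the blue channel varies, for simplicity).
function generateShades(count: number): string[] {
  const shades: string[] = [];
  for (let i = 0; i < count; i++) {
    const blue = Math.round(155 + (100 * i) / (count - 1)); // 155..255
    shades.push(`#0044${blue.toString(16).padStart(2, "0")}`);
  }
  return shades;
}

// Pick the shade with the highest observed click-through rate.
function bestShade(results: ShadeResult[]): ShadeResult {
  return results.reduce((best, current) =>
    current.clicks / current.impressions > best.clicks / best.impressions ? current : best
  );
}

// Invented data: every shade shown to 10,000 users, with random click counts.
const results: ShadeResult[] = generateShades(41).map((color) => ({
  color,
  impressions: 10000,
  clicks: Math.floor(400 + Math.random() * 100),
}));

console.log(bestShade(results)); // the "winning" shade, by click-through rate alone
```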

Within this context of A/B testing procedures, we must recall that “tools and equipment never exist simply in themselves, but always refer to that which can be done with them” (Verbeek, 2005). Here, it comes down to questioning whether this data-driven process could instead be a data-informed one. A/B tests do not allow the testing of a vision for the holistic experience of the interface, only small details separated from the whole picture. With this process, the role of the designer is reduced to embellishing the elements of the front end: design is confined to the role of adorning a content structure that is decided without it. The designer’s position within the back-end/front-end dichotomy is hard to establish. Anne Burdick argues that the designer’s role should be to participate in the development of the structure: “Design means shaping knowledge and endowing it with form; the field of design encompasses structures of argument” (Burdick, 2016). However, most of the time, the designer is relegated to the front end and involved in creating a presentational surface that makes up a user interface. Indeed, you can come across the term front-end designer, but there is no such thing as a back-end designer. I would argue that this is precisely the limited view one has of interface design, and of digital programming in general: a focus only on what appears on the screen. Shouldn’t the designer act as an intermediary and also think about the data container and its structure?

Selected bibliography:

-Bruno, I., Prévieux, J., Didier, E., 2014. Statactivisme, comment lutter avec des nombres. Zones.
-Burdick, A., 2016. Digital Humanities. MIT Press.
-Reisman, D., 2016. A Peek at A/B Testing in the Wild. Freedom to Tinker [online]. Available at: https://freedom-to-tinker.com/2016/05/26/a-peek-at-ab-testing-in-the-wild
-Fuller, M., 2008. Software Studies. MIT Press.
-Manovich, L., 2002. The Language of New Media. MIT Press.
-Masure, A., 2017. Design et Humanités Numériques. Editions B42.
-MIT Technology Review, 2014. 50 Smartest Companies, March/April.
-Northeastern University, 2019. Who’s the Guinea Pig? Investigating Online A/B/n Tests in-the-Wild. Available at: https://www.shanjiang.me/publications/fat19_paper.pdf
-Gupta, S., Ulanova, L., Bhardwaj, S., Dmitriev, P., Raff, P., Fabijan, A., 2018. The Anatomy of a Large-Scale Online Experimentation Platform. 10.1109/ICSA.2018.00009.
