The book "A. Text: a text, algorithmic text, artificial text" is an accessible guidebook to algorithms and their biases for readers who
1. use computers, smartphones, apps, or the Internet,
2. are not programmers, mathematicians, or statisticians,
3. live in a country where citizens are digitally administered.
Algorithms are considered neutral. In fact, they are not, because they are man-made tools.
Personal, social, and political factors play crucial roles in how tools are produced and used. And data, the learning material for algorithms, is just like society: it merely reflects (and reinforces) the status quo.
For these reasons, algorithms perform a task that no one expected of them: discriminating against humans. Not everyone is discriminated against, however, only those who are not white, male, heterosexual, or indigenous.
This reality is called "algorithmic bias". Such algorithms are in use worldwide, and more than half of the population is, or can be, invisibly discriminated against.
Mindful of this situation, the project's author created a dataset from texts about “algorithmic bias” that she had collected. After training on this dataset, an algorithm, a recurrent neural network, generated four texts that are by turns different, funny, boring, and meaningful. The author also wrote one text herself, based on the same dataset.
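The generation step described above can be sketched as a minimal character-level recurrent neural network. Everything here is a placeholder: the toy corpus, the network size, and the random weights are illustrative assumptions, not the project's actual dataset or model, and an untrained network like this one produces near-random text (training via backpropagation through time would be needed for meaningful output).

```python
import numpy as np

# Toy corpus standing in for the project's dataset (illustrative only).
corpus = "algorithms are not neutral because they are man-made tools. "
chars = sorted(set(corpus))
vocab = len(chars)
c2i = {c: i for i, c in enumerate(chars)}
i2c = {i: c for i, c in enumerate(chars)}

rng = np.random.default_rng(0)
hidden = 32
# Weight matrices of a vanilla RNN: input->hidden, hidden->hidden, hidden->output.
Wxh = rng.normal(0, 0.01, (hidden, vocab))
Whh = rng.normal(0, 0.01, (hidden, hidden))
Why = rng.normal(0, 0.01, (vocab, hidden))
bh = np.zeros(hidden)
by = np.zeros(vocab)

def step(h, idx):
    """One RNN step: consume character idx, return new state and next-char probabilities."""
    x = np.zeros(vocab)
    x[idx] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    y = Why @ h + by
    p = np.exp(y - y.max())          # softmax over the vocabulary
    return h, p / p.sum()

def generate(seed_char, length):
    """Sample `length` characters, starting from seed_char."""
    h = np.zeros(hidden)
    idx = c2i[seed_char]
    out = [seed_char]
    for _ in range(length):
        h, p = step(h, idx)
        idx = rng.choice(vocab, p=p)  # sample the next character
        out.append(i2c[idx])
    return "".join(out)

sample = generate("a", 60)
print(sample)
```

The sketch shows only the sampling loop: the network reads one character, updates its hidden state, and emits a probability distribution over the next character, which is how such a model "writes" text one character at a time.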
Comprising these five texts, "A. Text" fills a lacuna between believing in and imagining the future of algorithms and human life.
The ultimate motivation for making this guidebook was to understand, as a non-programmer, the following: What is an algorithm? What influence do algorithms and artificial intelligence have on me and on society? What do generated texts mean? What might the future of books look like? And, above all, how can we create objective algorithms in order to build a digitally equal and democratic society?