tante — Artificial Saviors

"Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM))


Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.

Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by the Rosie the Robot maid from 1962’s TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans in Douglas Adams’s Hitchhiker’s Guide to the Galaxy’s Marvin the Paranoid Android, as well as the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped a lot of our conceptions of society’s future. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.

Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has made ever-growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages by startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: The Church of AI.

A Promise Fulfilled?

For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Asked today, experts and spokespeople in the field still give largely the same number.

In 2017, AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into the conversational assistant that people carry around installed on their smart devices. It has gotten quite good at answering simple questions or triggering simple tasks: asking “OK Google, how far is it from here to Hamburg?” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment or to tell me that a parcel I was expecting has arrived.

Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the automated phone systems everyone is used to. Press #1 if you believe.

Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.

Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in the last decades, AI is considered to be not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” of digital services or cyber-physical systems like thermostats or “smart” lights.

Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung, et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network reached 52.3% when pointed at footage from BBC politics shows.

Another recent example from Google’s research department, one that shows just how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:

The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?….This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)

Google’s researchers interpret the capabilities of the neural network as expressions of the neural network creating a common super-language, one language to finally express all other languages.

These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI development used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled on basic logic quite deliberately, a lot of it looked like the first sessions of a traditional Logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.

But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early, as soon as natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like expressing the Canadian income tax in a Prolog[3] model (Sherman 1987), for example, never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to capture wide adoption in real-world use cases.

And then came neural networks.

Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of said neurons can be “trained” to react in a certain way.

Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained into the network—it needs to be trained.

There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with cats (and pictures without cats) would be necessary. When processing these pictures, the network gets positive feedback if the right neuron (the one signaling the detection of a cat) fires, and the connections that led to this result are strengthened. Where it has a 50/50 chance of being right on the first try, that chance will quickly improve until it reaches very good results, provided the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
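For readers who want to see the mechanics spelled out, here is a minimal sketch of such a network in Python. It is illustrative only: the “pictures” are random noise standing in for real photos, the labeling rule and all layer sizes are invented for the example, and a real cat detector would use far larger networks, real images and a proper training framework.

```python
# Toy "cat / not a cat" classifier: one hidden layer, trained with plain
# gradient descent. Purely illustrative -- the "images" are random noise and
# the labeling rule below is an invented stand-in for "there is a cat here."
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_hidden, n_out = 64, 16, 2   # input layer, hidden layer, "cat"/"not a cat"
n_samples = 200

# Fake training set: each row is a flattened 8x8 "picture".
X = rng.normal(size=(n_samples, n_pixels))
# Invented rule: label 1 ("cat") if the top half is brighter than the bottom half.
y = (X[:, :32].mean(axis=1) > X[:, 32:].mean(axis=1)).astype(int)
targets = np.eye(n_out)[y]              # one-hot: [not-cat, cat]

# Randomly initialized weights -- the untrained network.
W1 = rng.normal(scale=0.1, size=(n_pixels, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: input layer -> hidden layer -> output layer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # "Feedback": how far the firing of the output neurons is from the labels.
    delta_out = (output - targets) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)

    # Backpropagation: strengthen or weaken the connections that led to the result.
    W2 -= lr * (hidden.T @ delta_out) / n_samples
    W1 -= lr * (X.T @ delta_hid) / n_samples

# Evaluation: re-run the forward pass with the trained weights. The network now
# reproduces its training labels well above the 50/50 starting point, but an
# individual weight such as W1[0, 0] remains a meaningless number to a human.
output = sigmoid(sigmoid(X @ W1) @ W2)
accuracy = np.mean(output.argmax(axis=1) == y)
print(f"training accuracy: {accuracy:.2f}, one learned weight: {W1[0, 0]:.2f}")
```

Even in this toy version, all the “knowledge” the network acquires ends up in W1 and W2: matrices of floating-point numbers that say nothing readable about cats.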

Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams) but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why: neural networks are just sets of neurons and their weighted connections. But what does the weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.

Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, the absence of rain or snow, and Californian road sizes), automated translations can almost instantly give people at least an idea of what the rest of the world is talking about, and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?

Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keeps churning out like a machine, a look at just the last few months makes the trouble with the current mainstream in AI quite obvious.

In June 2015, Google’s Photos service was involved in a scandal: their AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” And even staying within the image detection domain, there have been numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.

While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?

John Searle made this argument in the “Chinese Room“ thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response of Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating an understanding of Chinese? Searle continues the experiment by assuming that the rules used by the machine get translated into readable English for a person to follow. A person locked in a room with these rules, pencil and paper could respond to every Chinese text given to them as convincingly as the machine could. But few would propose that this person now “understands” Chinese in the sense that a human being who knows Chinese does.

Current trends in the reception of AI seem to disagree: if a machine can do something that used to be only possible for human cognition, it surely must be intelligent. This assumption of Intelligence serves as foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans) and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?

But how can a neural network save us if it can’t even distinguish monkeys from humans?

Thy Kingdom Come 2.0

The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.

Technologists who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first very visible step of “upgrading” an automated process to becoming more than meets the eye. The multi-language-translation network is not an interesting statistical phenomenon that needs reflection by experts in the analyzed languages and the cultures using them with regards to their structural, social similarities and the way they influence(d) one another. Rather, it is but a miraculous device making steps towards creating an ideal language that would have made Ludwig Wittgenstein blush.[4]

But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.

But while predicting crimes still gets most people feeling uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. Yet “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and another one 45%, who should get the heart transplant?

"Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
“Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

Computers cannot lie, they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?

The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends; in traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how to address implicit biases or discriminations when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?

One of the original promises of computers had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.

The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creates or amplifies existing social discrimination or violence, the dogma of AI proponents often tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.

Religion is a complex topic without one simple definition that can be applied to things to decide whether they are, in fact, religions. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to define religion completely and precisely. But there are patterns that many popular religions share.

Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, there often is some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.

While this is nowhere near a comprehensive list of attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.

Singulatarianism

Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.

Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and according to most singulatarians shall) take control of most aspects of society. While the possibility of the Super-AI taking over by force is always lingering in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.

Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.

According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.

While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.

It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, its makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally Kurzweil’s own choice of words in this book is deeply religious, starting with the title of the book.

With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping along every form of scientific progress, to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). Ethical debates are, in Kurzweil’s perception, fundamentally pointless, with the universe and technology as god necessarily moving past them—regardless of what the result of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.

Because the intentions and reasoning of the super-AI being are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high-priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.

Not every proponent of AI goes as far as the singulatarians go. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or decide fully automatically where police should focus their attention (Perry et al. 2013). The second example (usually referred to as “predictive policing”) probably illustrates the problem best: all the training data used to build the models that are supposed to help police be more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.
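The feedback loop is easy to simulate. The sketch below is a deliberately crude illustration, not a model of any real predictive-policing product; all the numbers (the identical “true” crime rate, the skewed historical record, the patrol budget) are invented for the example. Two neighborhoods generate crime at the same underlying rate, but the historical record reflects heavier past policing of one of them, and a system that allocates patrols in proportion to recorded incidents never lets that skew wash out.

```python
# Crude simulation of the predictive-policing feedback loop described above.
# Neighborhoods A and B have an identical true crime rate, but the historical
# record is skewed because A was patrolled more heavily in the past. A system
# that sends patrols where past records point keeps "confirming" that A is the
# problem area, because crimes only enter the record where patrols look.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.1           # identical in both neighborhoods
recorded = {"A": 40, "B": 20}   # biased history: A was over-policed
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # "Prediction": allocate patrols in proportion to recorded incidents.
    patrols = {hood: round(TOTAL_PATROLS * count / total)
               for hood, count in recorded.items()}

    # Crimes only enter the record where patrols happen to look.
    for hood, n_patrols in patrols.items():
        recorded[hood] += sum(1 for _ in range(n_patrols)
                              if random.random() < TRUE_CRIME_RATE)

    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: patrols {patrols}, share of records pointing at A: {share_a:.0%}")
```

The recorded data never reflects where crime actually happens, only where the system was told to look, which is exactly what it means for training data to be soaked in structural bias.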

While there is value to automating certain dangerous or error-prone processes, like for example driving cars in order to protect human life or protect the environment, extending that strategy to society as a whole is a deeply problematic approach.

The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs doesn’t only leave behind the idea of human exceptionalism (which in itself might not even be too bad), but the idea of politics as a social system of communication. When decisions are made automatically without any way for people to understand the reasoning, to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea, making the struggle of marginalized groups, making any political program that’s not focused on optimizing[5] the status quo, unthinkable.

Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.

Daemons in the Digital Ether

Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at the present, seems like a precognition of sorts.

Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?

Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.

The world today is deeply cyber-physical. The separation of the digital and the “real” worlds that sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011) can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan and in certain cases even whether we may cross a border or not. These processes of power, which traditionally “ran on” social systems, on government organs, organizations or maybe just individuals, are now moving into software agents, removing the risky, biased human factor as well as checks and balances.

The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, its ignorance towards this matter and the way that group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.

Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives that define what is debatable. Science as a tool to separate truth from falsehood is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.

With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.

When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by this hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, which gives their public words a direct line into the public’s collective consciousness.

On the other hand, the public does have mental guards against “opinion” and “belief” in place that get taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard is circumvented when supposedly objective technologists propose their personal tech-religion as fact.

Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.

Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. Calling these systems “intelligent” implies a certain level of agency, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.

But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations developing and deploying them, as Cathy O’Neil, among other scientists, illustrates in her book Weapons of Math Destruction:

Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)

For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”

Amen

The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, founding executive editor of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.

But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.

The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can serve a society to create fairer, more transparent processes that are, in fact, not less but more accountable.

But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.

"Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
“Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect, and constantly evaluate the data used to train these systems. To integrate perspectives of marginalized people, of people potentially affected negatively even in the first steps of the process of training these systems. And to stop offloading responsibility for the actions of automated systems to the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.

Amen.

_____

tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.


_____

Notes

[1] Moore’s Law describes the observation, made popular by Intel co-founder Gordon Moore, that the number of transistors per square inch doubles roughly every two years (or every 18 months, depending on which version of the law is cited).

[2] https://www.w3.org/RDF/.

[3] Prolog is a logic programming language that expresses problems as resolutions of logical expressions.

[4] In the Philosophical Investigations (1953) Ludwig Wittgenstein argued against language somehow corresponding to reality in a simple way. He used the concept of “language games” to illustrate that meanings of language overlap and are defined by the individual use of language, rejecting the idea of an ideal, objective language.

[5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.

[6] We used to call them “statisticians.”

[7] The creation of intelligence, of life, is a feat traditionally reserved for the gods of old. This is another link to religious world views as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of current popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.

_____

Works Cited
