Cognitive technologies: mapping the Internet governance debate
by Goran S. Milovanović
This paper
• provides a simple explanation of what cognitive technologies are.
• gives an overview of the main idea of cognitive science (why human minds and computers could
be thought of as being essentially similar kinds of systems).
• discusses in brief how developments in engineering and fundamental research interact to result
in cognitive technologies.
• presents an example of applied cognitive science (text‑mining) in the mapping of the Internet
governance debate.
Introduction
Among the words that first come to mind
when Internet governance (IG) is mentioned,
complexity surely ranks among the forerunners.
But do we ever grasp the full complexity of such
issues? Is it possible for an individual human
mind ever to claim a full understanding of a
process that encompasses thousands of actors,
a plenitude of different positions, articulates an
agenda of almost non‑stop ongoing meetings,
conferences, forums, and negotiations, while
addressing the interests of billions of Internet
users? With the development of the Internet,
the Information Society, and the Internet
governance processes, the amount of information
that demands effective processing in order for
us to act rationally and in real time increases
tremendously. Paradoxically, the Information Age, marked by the discovery of the possibility of digital computers in the first half of the twentieth century, exposed the shortcomings of our processing capacities very quickly as it progressed. The availability of home computers
and the Internet have been contributing to this
paradox since the early 1990s: as the number of
networked social actors grew, the governance
processes naturally faced increased demand for
information processing and management. But
this is not simply a question of how much raw processing power or how much memory storage we have at our disposal. The complexity of the social processes that call for good governance, as well as the amount of communication that mediates the actions of the actors involved, increase to a level where qualitatively different forms of management must come into play. One cannot understand them simply by looking at them, or by listening to what everyone has to say: there are too many voices, and among billions of thoughts, ideas, concepts, and words, the known limits of human cognition must be recognised.
The good news is, as the Information Age
progresses, new technologies, founded upon the
scientific attempts to mimic the cognitive functions
of the human mind, are becoming increasingly
available. Many of the computational tools that
were previously available only to well-funded
research initiatives in cognitive science and
artificial intelligence can nowadays run on
average desktop computers and laptops. With
increased trends of cloud computing and the
parallel execution of thousands of lines of
computationally demanding code, the application
of cognitive technologies in attempts to discover
meaningful regularities in vast amounts of
structured and unstructured data is now within
reach. If the known advantages of computers
over human minds – namely, the speed of
processing that they exhibit in repetitive,
well‑structured, daunting tasks performed
over huge sets of data – can combine with at
least some of the advantages of our natural
minds over computers, what new frontiers
are touched upon? Can computers do more
than beat the best of our chess players? Can
they help us to better manage the complexity
of societal consequences that have resulted
from our own discovery and the introduction
of digital technologies to human societies? How
can cognitive technologies help us analyse and
manage global governance processes such
as IG? What are their limits and how will they
contribute to societal changes themselves? These
are the questions that we address in this short
paper, tackling the idea of cognitive technology
and providing an illustrative example of their
application in the mapping of the IG debate.
Box 1: Cognitive technologies
• The Internet links people; networked
computers are merely mediators.
• By linking people globally, the Internet
has created a network of human minds –
systems that are a priori more complex
than digital computers themselves.
• The networked society exchanges a vast
amount of information that could not have
been transmitted before the inception of
the Internet: management and governance
issues become critical.
• New forms of governance introduced:
global IG.
• New forms of information processing
introduced: cognitive technologies. They
result from the application of cognitive
science that studies both natural and
artificial minds.
• Contemporary cognitive technologies
present an attempt to mimic some of the
cognitive functions of the human mind.
• Increasing raw processing power (cloud
computing, parallelisation, massive
memory storage) nowadays enables the widespread application of cognitive
technologies.
• How do they help and what are their limits?
The main idea: mind as a machine
For obvious reasons, many theoretical
discussions and introductions to IG begin with
an overview of the history of the Internet. For
reasons less obvious, many discussions about
the Internet and the Information Society tend to
suppress the historical presentation of an idea
that is clearly more important than the very idea
of the Internet. The idea is characteristic of the
cognitive psychology and cognitive science of
the second half of the twentieth century, and
it states – to put it in a nutshell – that human
minds and digital computers possibly share many
important, even essential properties, and that
this similarity in their design – which, as many
believe, goes beyond pure analogy – opens a
set of prospects towards the development of
artificial intelligence, which, if achieved, might prove to be the most important technological development in the history of humankind.
From a practical point of view, and given the
current state of the technological development,
the most important consequence is that at least
some of the cognitive functions of the human
mind can be mimicked by digital computers.
The field of computational cognitive psychology, where behavioural data collected from human participants in experimental settings are modelled mathematically, increasingly contributes to our understanding that the human mind acts in perception, judgment, decision-making, problem-solving, language comprehension, and other activities as if it were governed by a set of natural principles that can be effectively simulated on digital computers. Again, even if the human mind is essentially different from a modern digital computer, these findings open a way towards the simulation of human cognitive functions and their enhancement (given that digital computers are able to perform many simple computational tasks with an efficiency that is orders of magnitude above that of natural minds).
An overview of cornerstones in the historical
development of cognitive science is given
in Appendix I. The prelude to the history of cognitive science belongs to the pre-World War II epoch, when a generation of brilliant mathematicians and philosophers, certainly best represented by the ingenious British mathematician Alan Mathison Turing (1912–1954), paved the way towards the discovery of the limits of formalisation in logic and mathematics in general. By formalisation we mean the expression of any idea in a strictly defined, unambiguous language, precisely enough that no two interpreters could possibly argue over its meaning. The concept of formalisation is
important: any problem that is encoded by a set
of transformations over sequences of symbols –
in other words, by a set of sentences in a precise,
exact, and unambiguous language – is said to
be formalised. The question of whether there is meaning to human life can thus probably never be formalised. The question of whether there is a certain way for White to win a chess game, given the initial advantage of having the first move, can be formalised, since chess is a game that receives a straightforward formal description through its well-defined, exact rules.
Turing was among those who discovered a way of expressing any problem that can be formalised at all in the form of a program for an abstract computational machine known as the Universal Turing Machine (UTM). By providing the definition of his abstract computer, he was able to show how any mathematical reasoning – and all mathematical reasoning takes place in strictly formalised languages – can essentially be understood as a form of computation. Unlike computation in the narrow sense, where the term usually refers to basic arithmetic operations with numbers only, this broad sense of computation encompasses all precisely defined operations over symbols and sets of symbols in some predefined alphabet. The alphabet is used to describe the problem, while the instructions to the Turing Machine control its behaviour, which essentially amounts to no more than the translation of sets of symbols from their initial form into some other form, with one of the possible forms of transformation being discovered and recognised as a solution to the given problem – the moment when the machine stops working. More importantly, from Turing's discovery it followed that formal reasoning in logic and mathematics can be performed mechanically, i.e., an automated device could be constructed that computes any computable function at all. The road towards the development of digital computers was thus open.
But even more importantly, following Turing's analyses of mechanical reasoning, the question was posed of whether the human mind is simply a biological incarnation of universal computation – a complex universal digital computer, instantiated by biological evolution instead of being a product of design processes, and implemented in carbon-based organic matter instead of silicon. The idea that human intelligence
shares the same essential properties as Turing’s
mechanised system of universal computation
proved to be the major driving force in the
development of post-World War II cognitive psychology. For the first time in history, mankind not only developed the means of advancing artificial forms of thinking, but instantiated the first theoretical idea that saw the human mind as a natural, mechanical system whose abstract structure is at least, in a sense, analogous to some well-studied mathematical description. A way for the naturalisation of psychology was finally opened, and cognitive science, as the study of natural and artificial minds, was born.
Roughly speaking, three important phases in
the development of its mainstream can be
recognised during the course of the twentieth
century. The first important phase in the development of cognitive science was marked by a clear recognition that, at least in principle, the human mind could operate on principles that are exactly the same as those that govern universal computation. Newell and Simon's Physical Symbol System Hypothesis [1] provides probably the most important theoretical contribution to this first, pioneering phase. Attempts to design universal problem solvers and computers that successfully play chess were characteristic of this phase. The ability to produce and understand natural language was recognised as a major characteristic of an artificially intelligent system. An essential critique of this first phase in the historical development of cognitive science was provided by the philosopher Hubert Dreyfus in his classic What Computers Can't Do in 1972. [2] The second phase, starting approximately in the 1970s and gaining momentum throughout the 1980s and 1990s, was characterised by an emphasis on the problems of learning, the restoration of importance of some of the pre-World War II principles of behaviouristic psychology, the realisation that well-defined formal problems such as chess are not really representative of the problems that human minds are good at solving, and the exploitation of a class of computational models of cognitive functions known as neural networks. The results of this second phase, marked mainly by the theoretical movement of connectionism, showed how sets of strictly defined, explicit rules almost certainly fail to describe adequately the highly flexible, adaptive nature of the human mind. [3a,3b] The third phase is rooted
in the 1990s, when many cognitive scientists
began to understand that human minds
essentially operate on variables of uncertain value, with incomplete information, and in
uncertain environments. Sometimes referred
to as the probabilistic turn in cognitive science, [4]
the important conclusion of this latest phase in
the development of cognitive science is that the
language of probability theory, used instead of
(or in conjunction with) the language of formal
logic, provides the most natural way to describe
the operation of the human cognitive system.
The widespread application of decision theory,
describing the human mind as a biological organ
that essentially evolved in order to perform the
function of choice under risk and uncertainty, is
characteristic of the most recent developments
in this third, contemporary phase in the history
of cognitive science. [5]
Box 2. The rise of cognitive science
In summary:
• Fundamental insights in twentieth-century logic and mathematics enabled a first attempt at a naturalistic theory of human intelligence.
• Alan Turing's seminal contribution to the theory of computation enabled a direct parallel between the design of artificially and naturally intelligent systems.
• This theory, in its mainstream form, sees no essential differences between the structure of the human mind and the structure of digital computers, both viewed at the most abstract level of their design.
• Different theoretical ideas and mathematical theories were used to formalise the functioning of the mind during the second half of the twentieth century. The ideas of physical symbol systems, neural networks, and probability and decision theory played the most prominent roles in the development of cognitive science.
The machine as a mind: applied
cognition
As is widely acknowledged, humanity has still not achieved the goal of developing true artificial intelligence. What, then, is applied cognition?
At the current stage of development, applied
cognitive science encompasses the application
of mostly partial solutions to partial cognitive
problems. For example, we cannot build software that reads Jorge Luis Borges' collected short stories and then produces a critical analysis from the viewpoint of some specific school of literary criticism. One might say that not many human beings can actually do that either. But we cannot accomplish even much simpler tasks, the general rule being that the more general a cognitive task is, the harder it gets to simulate. What we can do, for example, is feed software a large collection of texts from different authors, let it search through the collection, recognise the most characteristic words and patterns of word usage, and then successfully predict the authorship of a previously unknown text. We can teach computers to
recognise some visual objects by learning, with feedback, from their descriptions in terms of simpler visual features, and we are getting good at making them recognise faces in photographs. We cannot ask a computer to act creatively in the way that humans do, but we can make it prove complicated mathematical theorems that would call for years of mathematical work by hand, and even produce aesthetically pleasing visual patterns and music by sampling, resampling, and adding random but not completely irregular noise to initial sound patterns.
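As a toy illustration of the authorship task just described, the following sketch compares the word-frequency profile of an unknown text with per-author profiles; the texts, the profile representation, and the use of correlation as the similarity measure are all illustrative assumptions, not a description of any particular system:

    # A toy sketch of authorship attribution from word-frequency profiles.
    tokenize <- function(txt) {
      words <- unlist(strsplit(tolower(txt), "[^a-z]+"))
      words[nchar(words) > 0]
    }

    profile <- function(txt, vocab) {
      counts <- table(factor(tokenize(txt), levels = vocab))
      as.numeric(counts) / max(sum(counts), 1)  # relative word frequencies
    }

    attribute_author <- function(unknown, training) {
      # 'training' is a named list: one (concatenated) text per author
      vocab <- unique(tokenize(paste(unlist(training), collapse = " ")))
      profs <- sapply(training, profile, vocab = vocab)  # vocabulary x authors
      sims <- cor(profs, profile(unknown, vocab))        # similarity of usage patterns
      rownames(sims)[which.max(sims)]
    }

    training <- list(
      austen   = "it is a truth universally acknowledged that a single man",
      melville = "call me ishmael some years ago never mind how long precisely"
    )
    attribute_author("a single man in possession of a good fortune", training)

Real systems use far larger corpora and richer features, but the principle is the same: patterns of word usage betray their author.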
In cognitive science, engineers learn from psychologists and vice versa: mathematical models, developed initially to solve purely practical problems, are imported into psychological theories of cognitive functions. The goals that cognitive engineers and psychologists pursue are only somewhat different. While the latter address mainly the functioning of natural minds, the former do not have to constrain a solution to some cognitive problem by imposing on it the limits of the human mind and the realistic neurophysiology of the brain. Essentially, the direction of the arrow usually goes from mathematicians and engineers towards psychologists: the ideas proposed in the field of artificial intelligence (AI) are tested only after having been dressed in the suit of an empirical psychological theory. However, engineers and mathematicians in AI discover their ideas by observing and reflecting on the only known truly intelligent system, namely the real, natural, human mind.
Many computational methods were thus first discovered in the field of AI before they were tried out as explanations of the functioning of the human mind. To begin with, the idea of physical symbol systems, provided by Newell and Simon in the early formulation of cognitive science, presents a direct interpretation of the symbolic theory of computation initially proposed by Turing and the mathematicians of the first half of the twentieth century. Neural networks, which present a class of computational models that can learn to respond to complex external stimuli in a flexible and adaptive way, were clearly motivated by the empirical study of learning in humans and animals. However, they were first proposed as an idea in the field of artificial intelligence, and only later applied in human cognitive psychology. Bayesian networks, also known as causal (graphical) models, [6] represent structured probabilistic machinery that deals efficiently with learning, prediction, and inference tasks; they were again first proposed in AI before heavily influencing the most recent developments in psychology. Decision and game theory, to provide an exception, were initially developed and reflected on in pure mathematics and mathematical economics before being imported into the arena of empirical psychology, where they still represent both a focal subject of experimental research and a mathematical modelling toolkit.
The current situation in applying the known principles and methods of cognitive science can be described as eclectic. In applications to real-world problems, where the aim is not necessarily to describe truthfully the functioning of the human mind, algorithms developed by cognitive scientists do not need to obey any 'theoretical purity'. Many principles discovered in empirical psychology, for example reinforcement learning, are applied without necessarily applying them in exactly the same way as they are thought to operate in natural learning systems.
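As an example of this eclectic, engineering attitude, the following minimal sketch applies the reinforcement learning principle in its applied form – an epsilon-greedy bandit with invented reward probabilities – borrowing the idea of learning from reward feedback without any claim to model natural learners:

    # Epsilon-greedy bandit: a minimal engineering use of the reinforcement
    # learning principle; the reward probabilities are invented for illustration.
    set.seed(1)
    p_reward <- c(0.2, 0.5, 0.8)  # true payoff rates, unknown to the agent
    q <- rep(0, 3)                # estimated value of each action
    n <- rep(0, 3)                # how often each action was tried
    epsilon <- 0.1                # exploration rate

    for (t in 1:1000) {
      a <- if (runif(1) < epsilon) sample(3, 1) else which.max(q)
      r <- rbinom(1, 1, p_reward[a])    # observe a (stochastic) reward
      n[a] <- n[a] + 1
      q[a] <- q[a] + (r - q[a]) / n[a]  # incremental mean update
    }
    round(q, 2)  # the estimates approach p_reward, and action 3 dominates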
As already noted, it is uncertain whether applied cognition will ever produce an AI that fully resembles the natural mind. A powerful analogy has been proposed: people rarely admit that humankind has never fully understood natural flight in birds or insects, in spite of the fact that we have and use the artificial flight of airplanes and helicopters. The equations that would correctly describe the natural, dynamic, biomechanical systems that fly are simply too complicated and, in general, they cannot be solved analytically even if they can be written down. But we invented artificial flight by reflecting on the principles of the flight of birds, without ever having a complete scientific understanding of it. Maybe AI will follow the same path: we may have useful, practical, and powerful cognitive applications without ever understanding the functioning of the human mind in its totality.
The main goal of current cognitive technologies, the products of applied cognitive science, is to help natural human minds better understand very complex cognitive problems – those that would be hard to comprehend by our mental functions alone – and to increase the speed and amount of processing that some cognitive tasks require. For example, studying thousands of text documents in order to describe, at least roughly, the main themes discussed in them can be automated to a degree that helps human beings get the big picture without actually reading through all of them.
Box 3. Applied cognition
• Cognitive engineers and cognitive psychologists learn from each other. The former reflect on natural minds and build algorithms that solve certain classes of cognitive problems, which leads directly to applications, while the latter test the proposed models experimentally to determine whether they describe the workings of the human mind adequately.
• Many principles of cognitive psychology are applied to real-world problems without necessarily mimicking the corresponding faculties of the human mind exactly. We discover something, then change it to suit our present purpose.
• We provide partial solutions only, since global human cognitive functioning is still too difficult to describe. However, even the partial solutions that are nowadays available far exceed what computers could do only decades ago.
• Contemporary cognitive technologies focus mainly on reducing the complexity of some cognitive tasks that would be hard to perform by relying on our natural cognitive functions only.
Example: applying text-mining to map
the IG debate
The NETmundial Multistakeholder Statement of São Paulo1 – the final outcome document of NETmundial (22–23 April 2014), the Global Multistakeholder Meeting on the Future of IG – resulted from a political process of immense complexity. Numerous forms of input, various
1 http://netmundial.br/netmundial-multistakeholder-statement/
expertise, several preformed bodies, and a mass of individuals and organisations representing different stakeholders, all interfaced, both online and in situ, through a complex timeline of the NETmundial process to result in
this document. On 3 April, the NETmundial Secretariat prepared the first draft, having previously processed more than 180 content contributions. The final document resulted from the negotiations in São Paulo, based on a second draft that itself incorporated numerous suggestions made in comments on the first draft. The multistakeholder process of
document drafting introduced in its production is already seen by many as a future common ingredient of global governance processes in general. Given the complexity of the IG debate alone, one could have anticipated that more complex forms of negotiation, decision-shaping, and crowdsourced document production would naturally emerge. As the complexity of the processes under analysis increases, the complexity of the tools used to conduct the analyses must increase too. At the present
point of its development, DiploFoundation’s
Text‑Analytics Framework (DTAF) operates
on the Internet Governance Forum (IGF) Text
Corpus, a collection of all available session,
workshop, and panel transcripts from the
IGF 2006–2014, encompassing more than
600 documents and utterances contributed by hundreds of speakers. By any standards in the field of text-mining – an area of applied cognitive science that focuses on statistical analyses of the patterns of words occurring in natural language – both the NETmundial collection of content contributions and the IGF Text Corpus present rather small datasets. Analyses of text corpora that encompass tens of thousands of documents are quite common. Imagine incorporating all websites, social media, and newspaper and journal articles on IG in order to perform full-scale monitoring of the discourse of the IG debate, and you are already there.
Obviously, the cognitive task of mapping the IG debate, represented even by only the two text corpora that we discuss here, is highly demanding. It is questionable whether a single policy analyst or social scientist could comprehend the full complexity of the IG discourse even in several years of dedicated work.
Here we illustrate the application of text-mining, a typical cognitive technology in use nowadays, to the discovery of useful, structured information in large collections of texts. We will focus our attention on the NETmundial corpus of content contributions and ask the following question: What are the most important themes, or topics, that appear in this set of more than 180 contributions, including the NETmundial Multistakeholder Statement of São Paulo? In order to answer this question, we first need to hypothesise a model of how the NETmundial discourse was produced. We rely on a fairly well-studied and frequently applied model in text-mining, known by its rather technical name of Latent Dirichlet Allocation (LDA; see the Methodology section in Appendix II [7,8,9]). In
LDA, it is assumed that each word (or phrase) in some particular discourse is produced by a set of underlying topics with some initially unknown probability. Thus, each topic is defined as a probability distribution across the words and phrases that appear in the documents. It is also assumed that each document in the text corpus is produced from a mixture of topics, each of them weighted differently, in proportion to their contribution to the generation of the words that comprise the document. Additional assumptions must be made about the initial distribution of topics across documents. All these assumptions are assembled in a graphical model that describes the relationships between the words, documents, and latent topics. One normally runs a number of LDA models that encompass different numbers of topics and relies on the statistical properties of the obtained solutions to recognise which one provides the best explanation for the structure of the text corpus under analysis. In the case of the NETmundial corpus of content contributions, an LDA model with seven topics was selected.
Appendix II presents the fifteen most probable words generated by each of the seven underlying topics. By inspecting which words are most characteristic of each of the topics discovered in this collection of texts, we were able to provide meaningful interpretations2 of the topics. We find that the NETmundial content contributions were mainly focused on questions of (1) human rights, (2) multistakeholderism, (3) a global governance mechanism for ICANN, (4) information security, (5) IANA oversight, (6) capacity building, and (7) development (see Table A-2.1 in Appendix II).
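To make this concrete, here is a minimal sketch of fitting and inspecting such a model with the topicmodels R package cited in the Bibliography [9]; the document-term matrix dtm is assumed to have been built beforehand, and the calls illustrate the method rather than reproduce DTAF's actual pipeline:

    # A minimal sketch, assuming a pre-built document-term matrix 'dtm'
    # (e.g. constructed with the tm package); not DTAF's actual pipeline.
    library(topicmodels)

    lda <- LDA(dtm, k = 7, method = "VEM")  # seven topics, VEM estimation

    terms(lda, 15)       # the 15 most probable words per topic (cf. Table A-2.1)
    post <- posterior(lda)
    str(post$terms)      # topic-word probability distributions
    str(post$topics)     # per-document topic weights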
In order to help a human policy analyst in their research on NETmundial, for example, we could determine the contribution of each of these seven topics to each document from the
2 I wish to thank Mr Vladimir Radunović of DiploFoundation
for his help in the interpretation of the topics obtained
from the LDA model of the NETmundial content
contributions.
collection of content contributions, so that an analyst interested in just some aspects of this complex process could select only the most relevant documents. As an illustration, Figure A-2.1 in Appendix II presents the distributions of topics found in the content contributions of two important stakeholders in the IG arena: civil society and government. It is easily read from the displays that the representatives of civil society organisations strongly emphasised human rights (Topic 1 in our model) in their contributions, while representatives of national governments focused more on IANA oversight (Topic 5) and development issues (Topic 7).
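A sketch of how such stakeholder profiles can be derived from the fitted model follows; the stakeholder vector labelling each document is an assumed input, not part of the model itself:

    # Sketch: average per-document topic weights by stakeholder group.
    # 'stakeholder' is an assumed character vector, one label per document.
    weights <- posterior(lda)$topics              # documents x topics matrix
    by_group <- apply(weights, 2, tapply, stakeholder, mean)
    round(100 * by_group / rowSums(by_group), 1)  # topic shares in % (cf. Figure A-2.1)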
Figure A-2.2 in Appendix II presents the structure of similarities between the most important words in the human rights topic (Topic 1, Table A-2.1 in Appendix II). We first selected only the content contributions made on behalf of civil society organisations. Then we used the probability distributions of words across topics, and the distribution of topic weights across the documents, to compute the similarities between all relevant words. Since similarity computed in this way is represented in a high-dimensional space, and is thus not suitable for direct visualisation, we decided to use the graph presented in Figure A-2.2. Each node in Figure A-2.2 represents a word, and each word receives exactly three arrows. These arrows originate at the nodes representing the three words found to be most similar to the target word. Each word is the origin of as many links as there are words in whose set of three most similar words it is found. Thus we can use the graph representation to assess the similarities in the patterns of word usage across different collections of documents. The lower display in Figure A-2.2 presents the similarity structure in the human rights topic extracted from governmental content contributions to NETmundial only. By comparing the two graphs, we can see that only slight differences appear, in spite of the fact that the importance of the human rights topic differs between the content contributions of these two stakeholders. Thus, they seem to understand the conceptual realm of human rights in a similar way, but tend to accentuate it differently in the statements of their respective positions.
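One way to compute such a 3-nearest-neighbour word graph is sketched below; the use of cosine similarity over the words' topic profiles is an illustrative assumption, since the paper does not spell out its exact similarity measure:

    # Sketch: a 3-nearest-neighbour graph over the top words of Topic 1.
    # Cosine similarity between topic profiles is an assumed choice of measure.
    phi <- posterior(lda)$terms                # topics x words probabilities
    top_words <- terms(lda, 15)[, 1]           # 15 most probable words of Topic 1
    m <- phi[, top_words]                      # topic profiles of these words
    m <- sweep(m, 2, sqrt(colSums(m^2)), "/")  # normalise columns to unit length
    sim <- t(m) %*% m                          # word x word cosine similarities
    diag(sim) <- -Inf                          # a word is not its own neighbour
    # For each word, the three most similarly used words (its incoming arrows):
    apply(sim, 1, function(s) names(sort(s, decreasing = TRUE))[1:3])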
The conclusions that stem from our cognitive analysis of the NETmundial content contributions could have been reached by a person who had not actually read any of these documents at all. The analysis does rely on some built-in human expert knowledge, but once set up, it can produce this and similar results in a fully automated manner. While it is not advisable to use this and similar methods instead of a real, careful study of the relevant documents, their power to improve on the work of skilled, thoroughly educated scholars and professionals should be emphasised.
Concluding remarks
However far we are from the ideal of true artificial intelligence – and given that the definition of what true artificial intelligence might be is not very clear in itself – the cognitive technologies that have emerged after more than 60 years of studying the human mind as a natural system are nowadays powerful enough to provide meaningful applications and valuable insights. With the increasing trends of big data, the numerous scientists involved in the development of more powerful algorithms and ever faster computers, cloud computing, and the means for massive data storage, even very hard cognitive problems will become addressable in the near future. The planet, our ecosystem, now almost completely covered by the Internet, will gain an additional layer of cognitive computation, making information search, retrieval, data mining, and visualisation omnipresent in our media environments.
A prophecy to end this paper with: not only will this layer of cognitive computation bring about more efficient methods of information management and extend our personal cognitive capacities, it will itself introduce additional questions and complications into the existing IG debate. Networks intermixing human minds and narrowly defined artificial intelligences will soon begin to present the major units for representing interests and ideas, and their future political significance should not be underestimated now, while their development is still in its infancy. They will grow fast, as fast as the field of cognitive science did.
Bibliography

[1] Newell A and Simon HA (1976) Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM 19(3), 113–126. doi:10.1145/360018.360022

[2] Dreyfus H (1972) What Computers Can't Do. New York: MIT Press. ISBN 0-06-090613-8

[3a] Rumelhart DE, McClelland JL and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.

[3b] McClelland JL, Rumelhart DE and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. Cambridge, MA: MIT Press.

[4] Oaksford M and Chater N (2009) Précis of Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Behavioral and Brain Sciences 32(1), 69–84. doi:10.1017/S0140525X09000284

[5] Glimcher P (2003) Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge, MA: MIT Press.

[6] Pearl J (2000) Causality: Models, Reasoning and Inference. Cambridge: Cambridge University Press.

[7] Blei DM, Ng AY and Jordan MI (2003) Latent Dirichlet Allocation. Journal of Machine Learning Research 3(4–5), 993–1022. doi:10.1162/jmlr.2003.3.4-5.993

[8] Griffiths TL, Steyvers M and Tenenbaum JB (2007) Topics in semantic representation. Psychological Review 114(2), 211–244. doi:10.1037/0033-295X.114.2.211

[9] Grün B and Hornik K (2011) topicmodels: An R Package for Fitting Topic Models. Journal of Statistical Software 40(13). Available at http://www.jstatsoft.org/v40/i13
Appendix I
Timeline of cognitive science
Year Selected developments
1936 Turing publishes On Computable Numbers, with an Application to the
Entscheidungsproblem. Emil Post achieves similar results independently of Turing.
The idea that (almost) all formal reasoning in mathematics can be understood as a
form of computation becomes clear.
1945 The Von Neumann Architecture, employed in virtually all computer systems in use
nowadays, is presented.
1950 Turing publishes Computing Machinery and Intelligence, introducing what is nowadays known as the Turing Test for artificial intelligence.
1956 • George Miller discusses the constraints on human short‑term memory in
computational terms.
• Noam Chomsky introduces the Chomsky Hierarchy of formal grammars,
enabling the computer modeling of linguistic problems.
• Allen Newell and Herbert Simon publish work on the Logic Theorist, which mimics the problem-solving skills of human beings; the first AI program.
1957 Frank Rosenblatt invents the Perceptron, an early neural network algorithm for supervised classification. The critique of the Perceptron published by Marvin Minsky and Seymour Papert in 1969 is frequently thought of as responsible for delaying the connectionist revolution in cognitive science.
1972 Stephen Grossberg starts publishing results on neural networks capable of
modeling various important cognitive functions.
1979 James J. Gibson publishes The Ecological Approach to Visual Perception.
1982 David Marr, Vision: A Computational Investigation into the Human Representation and
Processing of Visual Information makes a strong case for computational models of
biological vision and introduces the commonly used levels of cognitive analysis
(computational, algorithmic/representational, and physical).
1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols 1 and 2, are published, edited by David Rumelhart, Jay McClelland, and the PDP Research Group. The onset of connectionism (the term was first used by Donald Hebb in the 1940s). Neural networks come to be considered powerful models for capturing the flexible, adaptive nature of human cognitive functions.
1990s • Probabilistic turn: the understanding slowly develops, in many scientific centres and in the work of many cognitive scientists, that the language of probability theory provides the most suitable means of describing cognitive phenomena. Cognitive systems control the behaviour of organisms that have only incomplete information about the uncertain environments to which they need to adapt.
• The Bayesian revolution: most probabilistic models of cognition are expressed as mathematical models relying on the application of Bayes' theorem and Bayesian analysis. Latent Dirichlet Allocation (used in the example in this paper) is a typical example of Bayesian analysis.
• A methodological revolution is introduced by Pearl's study of causal (graphical) models (also known as Bayesian networks).
• John Anderson's methodology of rational analysis.
1992 Francisco J. Varela, Evan T. Thompson, and Eleanor Rosch publish The Embodied
Mind: Cognitive Science and Human Experience, formulating another theoretical
alternative to classical symbolic cognitive science.
2000s • Decision-theoretic models of cognition. Neuroeconomics: the human brain as a decision-making organ. The understanding of the importance of risk and value in describing cognitive phenomena begins to develop.
• Geoffrey Hinton and others introduce deep learning: a powerful learning method for neural networks, partially based on ideas that were already under discussion in the 1980s and early 1990s.
Appendix II
Topic model of the content contributions to the NETmundial
Methodology. A terminological model of the IG discourse was first developed by DiploFoundation's IG experts. This terminological model encompasses almost 5000 IG-specific words and phrases. The text corpus of NETmundial content contributions in this analysis encompasses 182 documents. The corpus was pre-processed and automatically tagged for the presence of the IG-specific words and phrases. The resulting document-term matrix, describing the frequencies of use of IG-specific terms across the 182 available documents, was modelled by Latent Dirichlet Allocation (LDA), a statistical model that enables the recognition of the semantic topics (i.e., thematic units) that account for the frequency distribution in the given document-term matrix. A single topic comprises all IG-specific terms; the topics differ in the probability they assign to each IG-specific term. The model selection procedure proceeded as follows. We split the text corpus into two halves, by randomly assigning documents to a training and a test set. We fit LDA models ranging from two to twenty topics to the training set and then compute the perplexity (an information-theoretic, statistical measure of badness-of-fit) of the fitted models on the test set. We select the best model as the one with the lowest perplexity. Since the text corpus is rather small, we repeated this procedure 400 times and looked at the distribution of the number of topics from the best-fitting LDA models across all iterations. This procedure pointed towards a model encompassing seven topics. We then fitted the LDA with seven topics to the whole NETmundial corpus of content contributions. Table A-2.1 presents the most probable words per topic. The original VEM algorithm was used to estimate the LDA model.
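A condensed sketch of this selection procedure follows, again assuming the document-term matrix dtm from the pre-processing step; LDA() and perplexity() are part of the topicmodels package [9], while details such as the split mechanics are illustrative (and, at 400 iterations, computationally heavy):

    # Sketch of model selection: repeated random 50/50 splits, perplexity on
    # the held-out half, and a vote over the winning number of topics.
    library(topicmodels)

    best_k <- replicate(400, {
      n_docs <- nrow(dtm)
      train <- sample(n_docs, n_docs %/% 2)
      test <- setdiff(seq_len(n_docs), train)
      perp <- sapply(2:20, function(k) {
        fit <- LDA(dtm[train, ], k = k, method = "VEM")
        perplexity(fit, newdata = dtm[test, ])  # badness-of-fit on the test half
      })
      (2:20)[which.min(perp)]
    })
    table(best_k)  # distribution of the best-fitting number of topics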
Table A-2.1. Topics in the NETmundial Text Corpus. The seven topics were recovered by the application of LDA to the NETmundial content contributions. For each topic, the words are listed in decreasing order of their probability of being generated by that topic.
Topic 1. Human Rights: right, human rights, principle, cyberspace, state, information, internet, protection, access, communication, surveillance, law, respect, international, charter

Topic 2. Multi-stakeholderism: IG, stakeholder, internet, principle, process, discuss, issue, participation, ecosystem, need, role, multistakeholder, governance, NETmundial, address

Topic 3. Global governance mechanism for ICANN: internet, global, governance, ICANN, need, technical, role, system, issue, IG, local, principle, level, country, state

Topic 4. Information security: internet, security, service, data, cyber, network, country, need, control, information, nation, policy, effective, trade, user

Topic 5. IANA oversight: ICANN, IANA, organisation, function, operation, account, process, review, policy, DNS, board, GAC, multistakeholder, model, government

Topic 6. Capacity building: curriculum, technology, analysis, research, education, blog, online, association, similarity, term, product, content, integration, innovative, public

Topic 7. Development: internet, IG, global, development, principle, open, governance, participation, continue, stakeholder, access, model, organisation, innovative, economic
Figure A-2.1. The comparison of civil society and government content contributions to NETmundial. We assessed the probabilities with which each of the seven topics from the LDA model of the NETmundial content contributions determines the contents of the documents, averaged them across all documents per stakeholder, normalised them, and expressed the contribution of each topic in %.
Figure A-2.2. The conceptual structures of the topic of human rights (Topic 1 in the LDA model of NETmundial content contributions) for civil society and government contributions. The graphs represent the 3-neighbourhoods of the 15 most important words in the topic. Each node represents a word and has exactly three arrows pointing at it: the nodes from which these arrows originate represent the words found to be among the three words used most similarly to the word that receives the links.
(Upper graph: Civil Society. Lower graph: Government.)
About the author
Goran S. Milovanović is a cognitive scientist who studies behavioural decision theory, perception of risk
and probability, statistical learning theory, and psychological semantics. He has studied mathematics,
philosophy, and psychology at the University of Belgrade, and graduated from the Department of
Psychology. He began his PhD studies at the Doctoral Program in Cognition and Perception, Department
of Psychology, New York University, USA, while defending a doctoral thesis entitled Rationality of
Cognition: A Meta-Theoretical and Methodological Analysis of Formal Cognitive Theories at the Faculty of
Philosophy, University of Belgrade, in 2013. Goran has a classic academic training in experimental
psychology, but his current work focuses mainly on the development of mathematical models of
cognition, and the theory and methodology of behavioural sciences.
He organised and managed the first research on Internet usage and attitudes towards information
technologies in Serbia and the region of SE Europe, while managing the research programme of the
Center for Research on Information Technologies (CePIT) of the Belgrade Open School (2002–2005), the
foundation of which he initiated and supported. He edited and co‑authored several books on Internet
Behaviour, attitudes towards the Internet, and the development of the Information Society. He managed
several research projects on Internet Governance in cooperation with DiploFoundation (2002–2014) and
also works as an independent consultant in applied cognitive science and da