

Common knowledge (logic)

Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.

The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper. It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists developed an interest in epistemic logic in general, and in common knowledge in particular, starting in the 1980s. There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway.

The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge".


1.1. Example Solution

The answer is that, on the kth dawn after the announcement, all the blue-eyed people will leave the island.


1.2. Example Proof

The solution can be seen with an inductive argument. If k = 1 (that is, there is exactly one blue-eyed person), the person will recognize that he alone has blue eyes (by seeing only green eyes in the others) and leave at the first dawn. If k = 2, no one will leave at the first dawn. The two blue-eyed people, each seeing only one other person with blue eyes, and observing that no one left on the first dawn (and thus that k > 1), will leave on the second dawn. Inductively, it can be reasoned that no one will leave at the first k − 1 dawns if and only if there are at least k blue-eyed people. Those with blue eyes, seeing k − 1 blue-eyed people among the others and knowing there must be at least k, will reason that they must have blue eyes and leave on the kth dawn.
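The inductive argument can be sketched as a short recursion (a hypothetical illustration, not part of the original puzzle statement): a blue-eyed islander who sees n blue-eyed others leaves one dawn after n blue-eyed islanders would have left on their own.

```python
def day_of_departure(seen_blue: int) -> int:
    """Dawn on which a blue-eyed islander who sees `seen_blue` other
    blue-eyed people deduces their own eye color.

    Base case: seeing no blue eyes, the announcement can only refer to
    me, so I leave at the first dawn.  Inductive step: if the people I
    can see sufficed to reach k, they would all have left one dawn
    earlier; since they did not, I must be blue-eyed too and leave one
    dawn after they would have."""
    if seen_blue == 0:
        return 1
    return day_of_departure(seen_blue - 1) + 1

def dawn_all_leave(k: int) -> int:
    """With k blue-eyed islanders, each of them sees k - 1 others."""
    return day_of_departure(k - 1)

print(dawn_all_leave(1), dawn_all_leave(2), dawn_all_leave(5))  # 1 2 5
```

The recursion mirrors the proof exactly: the base case is the k = 1 argument, and each recursive step is one application of the inductive hypothesis.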

What's most interesting about this scenario is that, for k > 1, the outsider is only telling the island citizens what they already know: that there are blue-eyed people among them. However, before this fact is announced, the fact is not common knowledge.

For k = 2, it is merely "first-order" knowledge. Each blue-eyed person knows that there is someone with blue eyes, but each blue-eyed person does not know that the other blue-eyed person has this same knowledge.

For k = 3, it is "second-order" knowledge. Each blue-eyed person knows that a second blue-eyed person knows that a third person has blue eyes, but no one knows that there is a third blue-eyed person with that knowledge, until the outsider makes his statement.

In general: for k > 1, it is "(k − 1)th-order" knowledge. Each blue-eyed person knows that a second blue-eyed person knows that a third blue-eyed person knows that … (repeat for a total of k − 1 levels) … a kth person has blue eyes, but no one knows that there is a "kth" blue-eyed person with that knowledge, until the outsider makes his statement. The notion of common knowledge therefore has a palpable effect: knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all) becomes common knowledge, the blue-eyed people on the island eventually deduce their status and leave.


2.1. Formalization: Modal logic (syntactic characterization)

Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. The extension consists of the introduction of a group G of agents and of n modal operators K_i (with i = 1, …, n), with the intended meaning that "agent i knows." Thus K_i φ (where φ is a formula of the calculus) is read "agent i knows φ." We can define an operator E_G, with the intended meaning of "everyone in group G knows," by defining it with the axiom

E_G φ ⇔ ⋀_{i ∈ G} K_i φ.

By abbreviating the expression E_G E_G^{n−1} φ with E_G^n φ and defining E_G^0 φ = φ, we can then define common knowledge with the axiom

C_G φ ⇔ ⋀_{i = 0}^{∞} E_G^i φ

There is, however, a complication. The languages of epistemic logic are usually finitary, whereas the axiom above defines common knowledge as an infinite conjunction of formulas, hence not a well-formed formula of the language. To overcome this difficulty, a fixed-point definition of common knowledge can be given. Intuitively, common knowledge is thought of as the fixed point of the "equation" C_G φ = φ ∧ E_G C_G φ. In this way, it is possible to find a formula ψ implying E_G(φ ∧ ψ), from which, in the limit, we can infer common knowledge of φ.

This syntactic characterization is given semantic content through so-called Kripke structures. A Kripke structure is given by (i) a set of states (or possible worlds) S, (ii) n accessibility relations R_1, …, R_n defined on S × S, intuitively representing what states agent i considers possible from any given state, and (iii) a valuation function π assigning a truth value, in each state, to each primitive proposition in the language. The semantics for the knowledge operator is given by stipulating that K_i φ is true at state s iff φ is true at all states t such that (s, t) ∈ R_i. The semantics for the common knowledge operator is then given by taking, for each group of agents G, the reflexive and transitive closure of the union of the relations R_i for all agents i in G (call this relation R_G), and stipulating that C_G φ is true at state s iff φ is true at all states t such that (s, t) ∈ R_G.
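This Kripke semantics can be sketched in a few lines of Python (a toy illustration; the state names, relations, and proposition below are invented): K_i φ checks all R_i-successors of a state, while C_G φ checks all states reachable through the reflexive-transitive closure of the union of the R_i.

```python
from itertools import product

def knows(R, phi, s, states):
    """K_i φ at s: φ is true at every t with (s, t) in R_i."""
    return all(t in phi for t in states if (s, t) in R)

def rt_closure(R, states):
    """Reflexive-transitive closure of a relation on `states`."""
    closure = set(R) | {(s, s) for s in states}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def common_knowledge(relations, phi, s, states):
    """C_G φ at s: φ is true at every state reachable via R_G, the
    reflexive-transitive closure of the union of the agents' R_i."""
    R_G = rt_closure(set().union(*relations), states)
    return all(t in phi for t in states if (s, t) in R_G)

# Toy model: three states; the proposition p holds at w0 and w1 only.
states, p = {"w0", "w1", "w2"}, {"w0", "w1"}
R1, R2 = {("w0", "w1")}, {("w1", "w2")}

print(knows(R1, p, "w0", states))                   # True
print(common_knowledge([R1, R2], p, "w0", states))  # False: w2 is reachable
```

The example shows why common knowledge is strictly stronger than individual knowledge: agent 1 knows p at w0, but the chain w0 → w1 → w2 reaches a ¬p state, so p is not common knowledge there.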


2.2. Formalization: Set-theoretic (semantic) characterization

Alternatively (yet equivalently), common knowledge can be formalized using set theory; this was the path taken by the Nobel laureate Robert Aumann in his seminal 1976 paper. We start with a set of states S. We can then define an event E as a subset of the set of states S. For each agent i, define a partition P_i on S. This partition represents the state of knowledge of the agent: in state s, agent i knows that one of the states in P_i(s) obtains, but not which one. (Here P_i(s) denotes the unique element of P_i containing s. Note that this model excludes cases in which agents know things that are not true.)

We can now define a knowledge function K in the following way:

K_i(e) = { s ∈ S ∣ P_i(s) ⊆ e }

That is, K_i(e) is the set of states in which the agent knows that event e obtains; it is always a subset of e.
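Under these definitions, the knowledge function is nearly a one-liner. The sketch below (with an invented four-state example) checks that K_i(e) collects exactly the states whose partition cell is contained in e.

```python
S = {1, 2, 3, 4}
P_i = [frozenset({1, 2}), frozenset({3, 4})]  # agent i's partition of S

def cell(partition, s):
    """P_i(s): the unique cell of the partition that contains state s."""
    return next(c for c in partition if s in c)

def K(partition, e):
    """K_i(e) = {s in S : P_i(s) subset of e} - the states in which
    agent i knows that event e obtains."""
    return {s for s in S if cell(partition, s) <= e}

e = {1, 2, 3}
print(K(P_i, e))        # {1, 2}: in state 3, agent i cannot rule out state 4
assert K(P_i, e) <= e   # K_i(e) is always a subset of e
```

State 3 is excluded even though 3 ∈ e, because agent i's cell {3, 4} also contains a state outside e: the agent cannot distinguish 3 from 4, so does not know e.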

Similar to the modal logic formulation above, we can define an operator for the idea that "everyone knows e ".

E(e) = ⋂_i K_i(e)

As with the modal operator, we iterate the E function: E^1(e) = E(e) and E^{n+1}(e) = E(E^n(e)). Using this, we can then define a common knowledge function,

C(e) = ⋂_{n = 1}^{∞} E^n(e).

The equivalence with the syntactic approach sketched above can easily be seen: consider an Aumann structure as just defined. We can define a corresponding Kripke structure by taking (i) the same space S, (ii) accessibility relations R_i defined as the equivalence relations whose equivalence classes are the cells of the partitions P_i, and (iii) a valuation function that assigns the value true to the primitive proposition p in all and only the states s such that s ∈ E^p, where E^p is the event of the Aumann structure corresponding to the primitive proposition p. It is not difficult to see that the common knowledge accessibility relation R_G defined in the previous section corresponds to the finest common coarsening of the partitions P_i for all i ∈ G, which is the finitary characterization of common knowledge also given by Aumann in the 1976 article.
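On a finite state space the chain e ⊇ E(e) ⊇ E²(e) ⊇ … stabilizes, so C(e) can be computed by iterating E to a fixed point. The sketch below (with invented two-agent partitions) also illustrates the coarsening remark: C(e) comes out as the union of those cells of the finest common coarsening of P_1 and P_2 that are contained in e.

```python
S = {1, 2, 3, 4}
P1 = [frozenset({1, 2}), frozenset({3, 4})]               # agent 1's partition
P2 = [frozenset({1, 2}), frozenset({3}), frozenset({4})]  # agent 2's partition
# Finest common coarsening of P1 and P2: {{1, 2}, {3, 4}}.

def K(partition, e):
    """K_i(e) = {s : P_i(s) subset of e}."""
    return {s for s in S if next(c for c in partition if s in c) <= e}

def E(e):
    """E(e): everyone knows e - the intersection of the K_i(e)."""
    return K(P1, e) & K(P2, e)

def C(e):
    """C(e): iterate E until a fixed point is reached; on a finite S
    the decreasing chain E(e), E(E(e)), ... must stabilize."""
    prev, cur = None, set(e)
    while cur != prev:
        prev, cur = cur, E(cur)
    return cur

print(C({1, 2, 3}))  # {1, 2}: the coarsening cell {1, 2} lies inside e
print(C(S) == S)     # True: the whole space is trivially common knowledge
```

Here E({1, 2, 3}) = {1, 2} already (agent 1 cannot rule out state 4 when in state 3), and {1, 2} is a fixed point of E, so C({1, 2, 3}) = {1, 2}, exactly the coarsening cell contained in the event.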


3. Applications

Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention. In this sense, common knowledge remains a central concept for linguists and philosophers of language (see Clark 1996) who maintain a Lewisian, conventionalist account of language.

Robert Aumann introduced a set-theoretical formulation of common knowledge (theoretically equivalent to the one given above) and proved the so-called agreement theorem: if two agents have a common prior probability over a certain event, and their posterior probabilities are common knowledge, then those posteriors are equal. A result based on the agreement theorem and proven by Milgrom shows that, given certain conditions on market efficiency and information, speculative trade is impossible.

The concept of common knowledge is central in game theory. For several years it was thought that the assumption of common knowledge of rationality for the players in the game was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies.

Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic; see Wooldridge's Reasoning about Rational Agents (2000), in which he uses a first-order logic incorporating epistemic and temporal operators, or van der Hoek et al.'s "Alternating Time Epistemic Logic".

In his 2007 book, The Stuff of Thought: Language as a Window into Human Nature, Steven Pinker uses the notion of common knowledge to analyze the kind of indirect speech involved in innuendoes.
