Torben Braüner (Roskilde) gives a talk at the MCMP Colloquium (17 January, 2013) titled "Hybrid-Logical Proof Theory: With an Application to False-Belief Tasks". Abstract: Hybrid logic is an extension of ordinary modal logic which allows explicit reference to individual points in a model (where the points represent times, possible worlds, states in a computer, or something else). This additional expressive power is useful for many applications; for example, when reasoning about time one often wants to formulate a series of statements about what happens at specific times. There is little consensus about proof theory for ordinary modal logic. Many modal-logical proof systems lack important properties, and the relationships between proof systems for different modal logics are often unclear. In my talk I will demonstrate that these deficiencies are remedied by hybrid-logical proof theory. I first give a brief introduction to hybrid logic and its origin in Arthur Prior's temporal logic. I then describe essential proof-theoretical results for natural deduction formulations of hybrid logic. Finally, I show how a proof system for hybrid logic can be used to formalize what are called false-belief tasks in cognitive psychology.
Grigory K. Olkhovikov (Ural Federal University Yekaterinburg) gives a talk at the MCMP Colloquium (25 April, 2013) titled "On flattening rules in natural deduction calculus for intuitionistic propositional logic". Abstract: Standard versions of natural deduction calculi consist of so-called ‘flat’ rules that either discharge some formulas as their assumptions or discharge no assumptions at all. However, non-flat, or ‘higher-order’, rules discharging inferences rather than single formulas arise naturally within the realization of Lorenzen’s inversion principle in the framework of natural deduction. For the connectives which are taken as basic in the standard systems of propositional logic, these higher-order rules can be equivalently replaced with flat ones. Building on our joint work with Prof. P. Schroeder-Heister, we show that this is not the case with every connective of intuitionistic logic, the connective $c(A,B,C) = (A \to B)\vee C$ being our main counterexample. We also show that the dual question must be answered in the negative, too; that is to say, the existence of a system of flat elimination rules for a connective of intuitionistic logic does not guarantee the existence of a system of flat introduction rules.
Chris Fermüller (Vienna) gives a talk at the MCMP Colloquium (2 May, 2013) titled "Semantic games and hypersequents: a case study in many valued reasoning". Abstract: For quite a while it had been an open problem whether there is an analytic (cut-free) calculus for infinite-valued Łukasiewicz logic, one of the three fundamental many-valued logics that lie at the centre of interest in contemporary mathematical fuzzy logic. The hypersequent calculus HL presented by Metcalfe, Gabbay, and Olivetti in 2004/5 settled the question positively; but HL did not fit well into the family of sequent and hypersequent systems for related nonclassical logics. In particular it remained unclear in what sense HL provides an analysis of logical reasoning in a many-valued context. On the other hand, already in the 1970s Robin Giles had shown that a straightforward dialogue game, combined with a specific way to calculate expected losses associated with bets on the results of ‘dispersive experiments’, leads to a characterisation of Łukasiewicz logic. We illustrate how these seemingly unrelated results fit together: the logical rules of HL naturally emerge from a systematic search for winning strategies in Giles's game. This amounts to a rather tangible interpretation of hypersequents that can be extended to other logics as well.
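As a brief illustration (not part of the abstract itself), the truth functions of infinite-valued Łukasiewicz logic are easy to state: truth values range over the real interval [0, 1], with 1 standing for full truth. A minimal sketch in Python:

```python
# Truth functions of infinite-valued Lukasiewicz logic over [0, 1].
# Illustrative sketch only; the talk concerns proof systems (hypersequents
# and Giles's game), not this semantic presentation.

def neg(x):
    """Lukasiewicz negation."""
    return 1 - x

def implies(x, y):
    """Lukasiewicz implication: min(1, 1 - x + y)."""
    return min(1, 1 - x + y)

def strong_conj(x, y):
    """Lukasiewicz strong conjunction (the Lukasiewicz t-norm)."""
    return max(0, x + y - 1)

# Implication is fully true whenever the consequent is at least as true
# as the antecedent, and degrades linearly with the antecedent's surplus:
assert implies(0.5, 0.5) == 1
assert implies(1, 0) == 0
assert abs(implies(0.8, 0.5) - 0.7) < 1e-9
```

On this semantics a formula is valid iff it takes value 1 under every assignment, which is exactly the notion the cut-free calculus HL captures proof-theoretically.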
Moritz Schulz (Barcelona) gives a talk at the MCMP Colloquium (7 February, 2013) titled "Modus Ponens on the Restrictor View". Abstract: Recently, Kolodny & MacFarlane (2010) have proposed a new counterexample to modus ponens, which bears interesting relations to the classic counterexample by McGee (1985). By way of resolving the issue, I will examine how the potential counterexamples can be analysed on the restrictor view of conditionals. The proposed resolution saves modus ponens by denying that the alleged counterexamples are proper instances of modus ponens. However, this solution raises the question whether there are any genuine instances of modus ponens on this view. To handle this problem, I will focus on the semantics of bare conditionals and develop a framework in which the validity of modus ponens can be addressed (and affirmed).
Jakub Szymanik (Amsterdam) gives a talk at the MCMP Colloquium (12 June, 2013) titled "From Logic to Behavior". Abstract: In this talk I will explore the applicability of modern logic and computation theory in cognitive science. I will show how logic can be used to build cognitive models in order to explain and predict human behavior. I will also illustrate the use of logical and computational toolboxes to evaluate (not necessarily logical) cognitive models along the following dimensions: (i) logical relationships, such as essential incompatibility or essential identity; (ii) explanatory power; (iii) computational plausibility. I will argue that logic is a general tool suited for cognitive modeling, and its role in psychology need not be restricted to the psychology of reasoning. Taking Marr's distinctions seriously, I will also discuss how logical studies can improve our understanding of cognition by proposing new methodological perspectives in psychology. I will illustrate my general claims with examples of successful research at the intersection of logic and cognitive science. I will mostly talk about two research projects I have recently been involved in: computational semantics for generalized quantifiers in natural language and logical models for higher-order social cognition. The major focus will be computational complexity and its interplay with "difficulty" as experienced by subjects in cognitive science.
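To give one concrete flavour of the complexity theme (an illustrative sketch, not an example taken from the talk): verifying "some" or "every" only requires scanning for a single witness or counterexample, whereas a proportional quantifier like "most" requires comparing cardinalities, which is why it demands more computational resources in standard automata-theoretic analyses of quantifier verification.

```python
# Illustrative sketch: verification procedures for three generalized
# quantifiers over finite sets. "most" is defined here as |A & B| > |A - B|.

def some(A, B):
    """True iff at least one A is a B (one witness suffices)."""
    return any(x in B for x in A)

def every(A, B):
    """True iff no A falls outside B (one counterexample refutes)."""
    return all(x in B for x in A)

def most(A, B):
    """True iff the As in B outnumber the As not in B (needs counting)."""
    inter = len(A & B)
    return inter > len(A) - inter

students = {'ann', 'bob', 'cleo', 'dan'}
passed = {'ann', 'bob', 'cleo', 'eve'}

assert some(students, passed)        # a witness exists
assert not every(students, passed)   # 'dan' is a counterexample
assert most(students, passed)        # 3 of 4 students passed
```

The point of such models is that the counting step in `most` tracks an empirically measurable difference in verification difficulty for subjects.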
Alexandra Zinke (Konstanz) gives a talk at the MCMP Colloquium (24 January, 2013) titled "Interpretational Logical Truth: The Problem of Admissible Interpretations". Abstract: According to the interpretational definition of logical truth a sentence is logically true iff it is true under all interpretations of the non-logical terms. The most prominent problem of the interpretational definition is the problem of demarcating the logical from the non-logical terms. I argue that it does not suffice to only exclude those interpretations from the admissible ones that reinterpret the logical constants. There are further restrictions on admissible interpretations we must impose in order to secure that there are at least some logical truths. Once it is seen that we must impose non-trivial, semantical restrictions on admissible interpretations anyway, the question arises why we should not also accept even further restrictions. I formulate restrictions which would lead to the consequence that all analytical sentences come out as logically true and argue that these restrictions are of the same character as those we already subscribe to. Imposing only some of the restrictions seems arbitrary. The real challenge for proponents of the interpretational definition is thus not just the problem of demarcating the logical from the non-logical terms, but the more general problem of demarcating the admissible from the inadmissible interpretations.
Diderik Batens (Ghent) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Transitory and Permanent Applications of Paraconsistency". Abstract: The advent of paraconsistency offers an excellent opportunity to unveil past prejudices. These do not only concern the truth or sensibility of inconsistencies, but many aspects of the nature of logic(s). My aim will be to raise questions (and possibly arrive at insights) on such topics as the following: methodological versus descriptive application contexts of logics, logical pluralism (in those application contexts), arguments for considering a specific application as suitable, truth-functionality of logical operators, sensibility of (certain uses of) classical negation, fallibilism in logic.
Marie Duzi (Technical University Ostrava) gives a talk at the MCMP Colloquium (15 May, 2014) titled "A plea for beta-reduction by value". Abstract: This paper solves, in a logically rigorous manner, a problem discussed in a 2004 paper by Stephen Neale and originally advanced as a counterexample to Chomsky’s theory of binding. The example I will focus on is the following. John loves his wife. So does Peter. Therefore, John and Peter share a property. Only which one? There are two options. (1) Loving John’s wife. Then John and Peter love the same woman (and there is trouble on the horizon). (2) Loving one’s own wife. Then, unless they are married to the same woman, John loves one woman and Peter loves another woman (and both are exemplary husbands). On the strict reading of “John loves his wife, and so does Peter” property (1) is the one they share. On the sloppy reading, property (2) is the one they share. The dialectics of this contribution is to move from linguistics through logic to semantics. An issue originally bearing on binding in linguistics is used to make a point about β-conversion in the typed λ-calculus. Since the properties loving John’s wife and loving one’s own wife as attributed to John are distinct, there is room for oscillation between the sloppy and the strict reading. But once we feed the formal renditions of attribution of these two properties to John into the widespread λ-calculus for logical analysis, a logical problem arises. The problem is this. Their respective β-redexes are distinct, for sure, but they share the same β-contractum. This contractum corresponds to the strict reading. So β-conversion predicts, erroneously, that two properties applied to John β-reduce to one. The result is that the sloppy reading gets squeezed out. β-reduction blots out the anaphoric character of ‘his wife’, while the resulting contractum is itself β-expandable back into both the strict and the sloppy reading. Information is lost in transformation.
The information lost when performing β-reduction on the formal counterparts of “John loves his wife” is whether the property that was applied was (1) or (2), since both can be reconstructed from the contractum, though neither in particular. The sentence “John loves his wife, and so does Peter” ostensibly shows that the λ-calculus is too crude an analytical tool for at least one kind of perfectly natural use of indexicals. The problematic reduction and its solution will both be discussed within the framework of Tichý’s Transparent Intensional Logic. Tichý’s TIL was developed simultaneously with Montague’s Intensional Logic. The technical tools of the two disambiguations of the analysandum will be familiar from Montague’s intensional logic, with two important exceptions. One is that we λ-bind separate variables w1,…,wn ranging over possible worlds and t1,…,tn ranging over times. This dual binding is tantamount to explicit intensionalization and temporalization. The other exception is that functional application is the logic both of extensionalization of intensions (functions from possible worlds) and of predication. I will demonstrate that, and how, the λ-calculus is up to the challenge, provided a rule of β-conversion by value is adopted. The logical contribution of the paper is a generally valid form of β-reduction by value rather than by name. The philosophical application of β-reduction by value to a context containing anaphora is another contribution of this paper. The standard approach to VP ellipsis based on λ-abstracts and variable binding can, thus, be safely upheld. Our solution has the following features. First, unambiguous terms and expressions with a pragmatically incomplete meaning, like ‘his wife’ or “So does Peter”, are analyzed in all contexts as expressing an open construction containing at least one free variable with a fixed domain of quantification. Second, the solution uses β-conversion by value, rather than conversion by name.
The generally valid rule ...
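The strict/sloppy collapse the abstract describes can be made concrete with a toy sketch (an illustration in Python, not part of the talk or of TIL): under β-reduction by name, the two distinct redexes built from the sloppy property (loving one's own wife) and the strict property (loving John's wife) contract to one and the same term.

```python
# Toy illustration of the strict/sloppy collapse under beta-reduction by name.
# Terms are nested tuples; ('lam', var, body) is an abstraction. This tiny
# example is closed, so naive substitution is capture-free here.

def substitute(term, var, value):
    """Replace every occurrence of var in term with value."""
    if term == var:
        return value
    if isinstance(term, tuple):
        return tuple(substitute(t, var, value) for t in term)
    return term

def beta_reduce(lam, arg):
    """Contract the redex (lam arg) by name: substitute arg unevaluated."""
    _, var, body = lam
    return substitute(body, var, arg)

# Sloppy property: loving one's own wife  ->  lambda x. loves(x, wife(x))
sloppy = ('lam', 'x', ('loves', 'x', ('wife', 'x')))
# Strict property: loving John's wife     ->  lambda x. loves(x, wife('John'))
strict = ('lam', 'x', ('loves', 'x', ('wife', 'John')))

# The redexes are distinct...
assert sloppy != strict
# ...but applied to John they share one contractum, so the information
# about which property was attributed is lost in the transformation:
assert beta_reduce(sloppy, 'John') == beta_reduce(strict, 'John')
```

Reduction by value, as advocated in the talk, blocks this information loss by fixing the argument's value before substitution rather than substituting the raw term.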
Andreas Kapsner (MCMP/LMU) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Why designate gluts?". Abstract: In this talk, I want to explore the following idea: Truth value gluts should be allowed in the semantics of logical systems, as they are in many non-classical systems. However, unlike what is standard in such systems, these gluts should be treated as undesignated values. I shall give my reasons for taking this to be a view worth exploring and discuss its effects on such topics as dialetheism, paraconsistency and relevance. On the whole, it will turn out to be a surprisingly attractive view that deals well with epistemic inconsistencies and semantic paradoxes. Some of the greatest difficulties arise in the attempt to account for interesting inconsistent scientific and mathematical theories; this, then, will be the touchstone for the proposed view.
Francesco Berto (Amsterdam) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Inconsistent Thinking, Fast and Slow". Abstract: This talk plays on Kahneman’s Thinking, Fast and Slow. We implement two reasoning systems: our Slow system is logical-rule-based. Our Fast system is associative, context-sensitive, and integrates what we conceive via background information. Slow inconsistent thinking may rely on paraconsistent logical rules, but I focus on Fast inconsistent thinking. I approach our Fast-conceiving inconsistencies in terms of ceteris paribus intentional operators: variably restricted quantifiers on possible and impossible worlds. The explicit content of an inconsistent conception is similar to a ceteris paribus relevant conditional antecedent. I discuss how such operators invalidate logical closure for conceivability, and how similarity works when impossible worlds are around.
Maarten McKubre-Jordens (Canterbury) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Doing mathematics paraconsistently. A manifesto". Abstract: In this talk, we outline several motivations for conducting mathematics–in the style of the working mathematician–without dependence on assumptions of non-contradiction. The story involves a short analysis of theorem and counterexample, what it is to reason paraconsistently within mathematics, and takes note of some non-traditional obstacles and attempts to resolve them. In part, this will provide motivation to the mathematician to think outside the box when approaching surprising conclusions within the usual framework. Then, as we delve into the mathematics, we survey some recent results in elementary analysis when performed paraconsistently, and outline some conjectures for future research. This talk is of interest both to provide reasons and techniques for paraconsistent mathematics, and to show how rich a picture can be painted without recourse to assumptions of non-contradiction.
Graham Priest (CUNY and St Andrews) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Models of Paraconsistent Set Theory". Abstract: Any adequate paraconsistent set theory must be able to validate at least a major part of the standard results of orthodox set theory. One way to achieve this is to take the universe or universes of sets to be such as to validate not only the naïve principles, but also all the theorems of Zermelo-Fraenkel set theory. In this talk I will discuss various constructions of models of set theory which do just this.
Itala M. Loffredo D'Ottaviano (Campinas) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Can a paraconsistent differential calculus extend the classical differential calculus?". Abstract: In 2000, da Costa proposed the construction of a paraconsistent differential calculus, whose language is the language L of his well-known paraconsistent logic C1, extended to the language of his paraconsistent set theory CHU1, introduced in 1986. We have studied and improved the calculus proposed by da Costa, having obtained extensions of several fundamental theorems of the classical differential calculus. From the introduction of the concept of paraconsistent super-structure X over a set X of atoms of CHU1 and of the concept of monomorphism between paraconsistent super-structures, we will present a Transference Theorem that “translates” the classical differential calculus into da Costa’s paraconsistent calculus.
Bryson Brown (Lethbridge) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "On the Preservation of Reliability". Abstract: …all models are wrong, but some are useful. (G. E. P. Box and N. R. Draper, 1987). C.S. Peirce examined several broad methods for arriving at beliefs in “On the Fixation of Belief”; the central theme of his essay is the importance of having a method that leads to stable agreement amongst the members of a society. Peirce argues that the ‘scientific method’ meets this standard, generalizing Hobbes’ observation that Harvey’s hypothesis of the circulation of the blood is an important example of a once-controversial view that came to be accepted even by those who initially rejected it, and a demonstration of the special epistemic success of science, in contrast with other forms of inquiry. But stable agreements are sometimes overturned. The brilliant, wide-ranging successes of classical physics made the basic principles of Newton’s theory laws of nature in the eyes of physicists, philosophers and the educated public. Yet they have been superseded by new principles. This pattern of success followed by failure and replacement is the key premise of Laudan’s pessimistic induction argument against scientific realism. Yet we have confidence, often justified confidence, in many scientific inferences, including inferences that begin with models and theoretical principles known to be false. Though the equations used and the descriptions of the actual systems our models are applied to are false, the results of the calculations (even when only approximate) are considered reliable for many purposes, and with good reason. Thus models provided by orbital mechanics are used to place probes in desired orbits and even land on other planets, while atmospheric GCMs are used to estimate large-scale climate changes likely to occur given various scenarios for human GHG emissions.
Neither the equations used nor the descriptions of the systems such models include are true; the results of the calculations (even assuming the calculations are exact) cannot reasonably be taken to be true, either. Yet we do rely on them, and with good reason. Many measurable physical quantities that models allow us to calculate values for turn out to be very close to the results of actual observations under a wide range of specifiable conditions. This paper develops a pragmatic view of theories, models and the inferences we use our models and theories to make. The preservation of reliability allows for the use of incompatible theories and principles in our inferences, along with contextually-determined levels of acceptable approximation, and conceptual shifts in how measurement results are interpreted in the light of the different principles relied on at different points. But it is also compatible with a modest scientific realism: the criteria by which we decide when a theory can be relied on generally include measurable parameters whose values, even though our understanding of them remains imperfect, reliably indicate when and to what extent a given theory is reliable and constitutes reliably settled science in a sense that Peirce might have found satisfying even without an accepted background theory in which we can explain both that reliability and its limits.
Otávio Bueno (Miami) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "Inconsistent scientific Theories: A Framework". Abstract: Four important issues need to be considered when inconsistent scientific theories are under discussion: (1) To begin with, are there–and can there be–such things as inconsistent scientific theories? On standard conceptions of the structure of scientific theories, such as the semantic and the syntactic approaches (Suppe [1989], and van Fraassen [1980]), there is simply no room for such theories, given the classical underpinnings of these views. In fact, both the syntactic and the semantic approaches assume that the underlying logic is classical, and as is well known, in classical logic everything follows from an inconsistent theory. Despite this fact, it seems undeniable that inconsistent scientific theories have been entertained–or, at least, stumbled upon–throughout the history of science. So, it looks as though we need to make room for them. (2) But once some room is made for inconsistent scientific theories, how exactly should they be accommodated? In particular, it seems crucial that we are able to understand the styles of reasoning that involve inconsistencies; that is, the various ways in which scientists and mathematicians reason from inconsistent assumptions without deriving everything from them. It is tempting, of course, to adopt a paraconsistent logic to model some of the reasoning styles in question (see da Costa and French [2003], da Costa, Krause, and Bueno [2007], and da Costa, Bueno, and French [1998]). This is certainly a possibility. However, actual scientific practice is not typically done using paraconsistent logic. 
And if our goal is to understand that practice in its own terms, rather than to produce a parallel discourse about that practice that somehow justifies the adequacy of the latter by invoking tools that are foreign to it, an entirely different strategy is called for. (3) What are the sources of the inconsistencies in scientific theories? Do such inconsistencies emerge from empirical reasons, from conceptual reasons, from both, or by sheer mistake? By identifying the various sources in question, we can handle and assess the significance of the inconsistencies in a better way. Perhaps some inconsistencies are more important, troublesome, or heuristically fruitful than others—and this should be part of their assessment. (4) Several scientific theories become inconsistent due to the mathematical framework they assume. For example, the theories may refer to infinitesimals, as the latter were originally formulated in the early versions of the calculus (see Robinson [1974] and Bell [2005]), the theories may invoke Dirac’s delta function (Dirac [1958]), or some other arguably inconsistent mathematical framework. The issue then arises as to how we should deal with inconsistent applied mathematical theories. What is the status of these theories? Which commitments do they bring? Are we committed to the existence of inconsistent objects if we use such theories in explaining the phenomena? Can an inconsistent scientific theory ever be indispensable? Questions of this sort need to be answered so that we can make sense of the role of inconsistent theories in applications. (For an insightful discussion, see Colyvan [2009].) In this paper, I examine these four issues, and develop a framework–in terms of partial mappings (Bueno, French and Ladyman [2002], and Bueno [2006]), and the inferential conception of the application of mathematics (Bueno and Colyvan [forthcoming])–to represent and interpret inconsistent theories in science. 
Along the way, I illustrate how the framework can be used to make sense of various allegedly inconsistent theories, from the early formulations of the calculus through Dirac’s delta function and Bohr’s atomic model (Bohr [1913]).
Holger Andreas (MCMP/LMU) gives a talk at the Conference on Paraconsistent Reasoning in Science and Mathematics (11-13 June, 2014) titled "A Paraconsistent Generalization of Carnap's Logic of Theoretical Terms".
Michael De (Konstanz) gives a talk at the MCMP Colloquium (18 December, 2014) titled "Negation as modality". Abstract: In a recent paper, Francesco Berto (forthcoming) defends an account of negation as a modality. According to that account, the negation of a sentence is true at a world x just in case the sentence is not true in any world compatible with x. Alternatively, the negation of a sentence is true at x just in case all the worlds at which the sentence is true are incompatible with x. Compatibility is taken to be the key notion in the account, and what minimal properties a negation has comes down to which minimal conditions compatibility satisfies. In this talk, we first point out what we take to be serious problems for this modal account. Second, we propose an alternative, non-modal, understanding of negation.
Kevin Zollman (CMU) gives a lecture (first session) at the Summer School on Mathematical Philosophy for Female Students (26 July - 1 August, 2015) titled "Introduction to Networks". Abstract: Social networks have become a central feature of the scientific study of social behavior and have been imported into philosophical discussions – like ethics, epistemology, and the philosophy of science – where social behavior is important. In ethics, scholars have asked what effect social networks might have on the evolution and maintenance of different ethical norms like fairness, cooperation, and altruism. As epistemologists have begun to take the social nature of knowledge more seriously, they too have begun to ask about how networks might influence the way knowledge is generated and transmitted. Finally, in philosophy of science scholars have asked how incorporating networks might change scientific theory, and how networks of scientists might come to learn about the world. This course will introduce students to the basics of social networks, some of the uses of social networks in philosophy, and how to understand and analyze networks for original research. Because some of the analysis of social networks requires the use of computer simulation, this course will also teach students how to use the computational tool NetLogo for analyzing networks. No prior knowledge of programming is expected.
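For readers new to the topic, the basic objects the course works with are simple to state (a hypothetical sketch in Python rather than the course's own NetLogo): a network is a set of nodes plus a symmetric neighbour relation, and elementary analysis starts with quantities like node degree.

```python
# Minimal sketch of a social network as an adjacency dict (undirected).
# Names and structure are invented for illustration; the course itself
# uses NetLogo for this kind of analysis.

network = {
    'ann':  {'bob', 'cleo'},
    'bob':  {'ann'},
    'cleo': {'ann', 'dan'},
    'dan':  {'cleo'},
}

def degree(node):
    """Number of neighbours of a node."""
    return len(network[node])

# 'ann' is the best-connected node in this toy network:
assert degree('ann') == 2
assert max(network, key=degree) == 'ann'
```

Questions about norm evolution or knowledge transmission then become questions about how properties of agents spread along such neighbour relations.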
Isidora Stojanovic (Jean Nicod Institute Paris) gives a lecture (first session) at the Summer School on Mathematical Philosophy for Female Students (26 July - 1 August, 2015) titled "Context-dependence and the Semantics-Pragmatics Interface". Abstract: Context-dependence is ubiquitous not only in language, but in cognition and action more generally. In the first part of the course, we shall introduce two basic tools from formal semantics (and pragmatics) that help us understand how the truth of a statement may depend on the context: on the one hand, the notion of presupposition, and on the other, possible world semantics, with its extensions and applications to modality, tense and doxastic expressions. In the second part, we shall use these tools to address a range of issues at the semantics-pragmatics interface, such as the relationship between alethic, deontic and epistemic modals, or the context-sensitivity of knowledge attributions and belief reports.
Julia Staffel (Washington University in St. Louis) gives a lecture (first session) at the Summer School on Mathematical Philosophy for Female Students (26 July - 1 August, 2015) titled "Attitudes in Epistemology: Belief vs. Credence". Abstract: This lecture stream is intended to be an introduction to some central topics in formal epistemology. Formal epistemology is a relatively recent branch of epistemology, which uses formal tools such as logic and probability theory in order to answer questions about the nature of rational belief. An important feature that distinguishes formal epistemology from traditional epistemology is not just its use of formal tools, but also its understanding of the nature of belief. Traditional epistemology tends to focus almost exclusively on what is called ‘outright belief’, where the options considered are just belief, disbelief, or suspension of judgment. By contrast, it is widely accepted among formal epistemologists that this conception of belief is too coarse-grained to capture the rich nature of our doxastic attitudes. They posit that humans also have degrees of belief, or credences, which can take any value between full certainty that something is true, and certainty that it is false. The shift in focus towards degrees of belief has generated a rich research program, parts of which integrate with issues in traditional epistemology, and parts of which are specific to the debate about degrees of belief. Important questions in the field are, for example: How are degrees of belief related to outright beliefs? What constraints are there on rational degrees of belief, and how can they be defended? How can we adequately represent degrees of belief in a formal framework? How do ideal epistemological norms bear on what non-ideal agents like us ought to believe? The results of these debates are relevant for many areas of philosophy besides epistemology, such as philosophy of mind, philosophy of language, and practical reasoning.