MCMP – Epistemology

Author: MCMP Team
Description

Mathematical Philosophy - the application of logical and mathematical methods in philosophy - is about to experience a tremendous boom in various areas of philosophy. At the new Munich Center for Mathematical Philosophy, which is funded mostly by the German Alexander von Humboldt Foundation, philosophical research will be carried out mathematically, that is, by means of methods that are very close to those used by scientists.
The purpose of doing philosophy in this way is not to reduce philosophy to mathematics or to natural science in any sense; rather mathematics is applied in order to derive philosophical conclusions from philosophical assumptions, just as in physics mathematical methods are used to derive physical predictions from physical laws.
Nor is the idea of mathematical philosophy to dismiss any of the ancient questions of philosophy as irrelevant or senseless: although modern mathematical philosophy owes a lot to the heritage of the Vienna and Berlin Circles of Logical Empiricism, unlike the Logical Empiricists, most mathematical philosophers today are driven by the same traditional questions about truth, knowledge, rationality, the nature of objects, morality, and the like that drove the classical philosophers, and no area of traditional philosophy is taken to be intrinsically misguided or confused anymore. It is just that some of the traditional questions of philosophy can be made much clearer and much more precise in logical-mathematical terms; for some of these questions, answers can be given by means of mathematical proofs or models; and on this basis new and more concrete philosophical questions emerge. This may then lead to philosophical progress, and ultimately that is the goal of the Center.
46 Episodes
Catarina Dutilh Novaes (Groningen) gives a talk at the MCMP Colloquium (24 January, 2013) titled "Reasoning biases and non-monotonic logics". Abstract: Stenning and van Lambalgen (2008) have argued that much of what is described in the psychology of reasoning literature as 'reasoning biases' can more accurately be accounted for by means of the concept of defeasible, non-monotonic reasoning. They rely on the AI framework of closed-world reasoning as the formal background for their investigations. In my talk, I continue the project of reassessing reasoning biases from a non-monotonic point of view, but use instead the semantic approach to non-monotonic logics presented in Shoham (1987), known as preferential semantics. I focus in particular on the so-called belief-bias effect and the Modus Ponens-Modus Tollens asymmetry. The ease with which these reasoning patterns are accounted for from a defeasible reasoning point of view lends support to the claim that (untrained) human reasoning has a strong component of defeasibility. I conclude with some remarks on Marr's 'three levels of analysis' and the role of formal frameworks for the empirical investigation of human reasoning.
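For reference, the core idea of Shoham-style preferential semantics (a standard textbook formulation, not taken from the talk) is that defeasible consequence quantifies only over the most preferred models of the premises:

\[ \Gamma \,\mid\!\sim\, \varphi \quad \Longleftrightarrow \quad \varphi \text{ holds in every } \preceq\text{-minimal model of } \Gamma , \]

where \( \preceq \) is a preference order on models. Adding premises can change which models are minimal, which is why the resulting consequence relation is non-monotonic, unlike classical entailment.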
Jake Chandler (Leuven) gives a talk at the MCMP Colloquium (6 February, 2013) titled "Reasons to Believe and Reasons to not." Abstract: The provision of a precise, formal treatment of the relation of evidential relevance, i.e. of providing a reason to hold or to withhold a belief, has arguably constituted the principal selling point of Bayesian modeling in contemporary epistemology and philosophy of science. By the same token, the lack of an analogous proposal in so-called AGM belief revision theory, a powerful and elegant qualitative alternative to the Bayesian framework, is likely to have significantly contributed to its relatively marginal status in the philosophical mainstream. In the present talk, I sketch out a corrective to this deficiency, offering a suggestion, within the context of belief revision theory, concerning the relation between beliefs about evidential relevance and commitments to certain policies of belief change. Aside from shedding light on the status of various important evidential 'transmission' principles, this proposal also constitutes a promising basis for the elaboration of a logic of so-called epistemic defeaters.
Benjamin Bewersdorf (Konstanz) gives a talk at the MCMP Colloquium (23 January, 2013) titled "Garber and Field Conditionalization". Abstract: The most influential formal account of rational belief change is Jeffrey conditionalization. Given a plausible assumption on the role of experiences for belief change, Jeffrey conditionalization turns out to be incomplete. Field tried to complete Jeffrey conditionalization by adding an input law to it. But, as Garber has pointed out, the resulting theory has a serious weakness. In this talk, I will both generalize Garber's objection against Field's account and show how Field's account can be modified to avoid it.
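For reference, Jeffrey conditionalization in its standard formulation (not quoted from the abstract) updates on an evidence partition {E_i} whose new probabilities q_i are fixed by experience:

\[ P_{\text{new}}(A) \;=\; \sum_i P_{\text{old}}(A \mid E_i)\, q_i , \qquad q_i = P_{\text{new}}(E_i). \]

When some q_j = 1 this reduces to strict conditionalization; Field's input law is, roughly, a further rule specifying how the character of an experience determines the q_i themselves.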
Veli Mitova (Vienna) gives a talk at the MCMP Colloquium (6 June, 2013) titled "How to be a truthy psychologist about evidence". Abstract: I defend the view that the only things that count as evidence for belief are factive tokens of psychological states. I first assume that the evidence for p can sometimes be a good reason to believe that p. I then argue, with some help from metaethics 101, that a reason is a beast of two burdens: it must be capable of being both a good reason and a motive. I then show that truthy psychologism is the only position that can honour The Beast of Two Burdens Thesis, without ruffling our pre-101 intuitions about good reasons, motives, and explanations.
Michael R. Waldmann (Göttingen) gives a talk at the MCMP Colloquium (23 April, 2014) titled "Structure Induction in Diagnostic Causal Reasoning". Abstract: Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner’s beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in two studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go “beyond the information given” and use the available data to make inferences on the (unobserved) causal, rather than on the (observed) data level.
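As a point of reference (a gloss, not taken from the abstract): the purely empirical diagnostic probability is just the conditional probability of the cause C given the effect E, which Bayes' theorem expresses as

\[ P(C \mid E) \;=\; \frac{P(E \mid C)\,P(C)}{P(E \mid C)\,P(C) + P(E \mid \neg C)\,P(\neg C)} , \]

whereas a structure-induction account, schematically, averages the diagnostic estimate over candidate causal structures S weighted by their posterior probability given the observed data D, i.e. \( P(C \mid E, D) = \sum_S P(C \mid E, S)\, P(S \mid D) \).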
Anna Leuschner (KIT) gives a talk at the MCMP Colloquium (9 April, 2014) titled "Epistemically Detrimental Dissent and the Milian Argument against the Freedom of Inquiry". Abstract: I'll present joint work with Justin Biddle. The idea of epistemically problematic dissent is counterintuitive at first glance; as Mill argues, even misguided dissent from a consensus position can be epistemically fruitful, as it can lead to a deeper understanding of consensus positions. Yet, focusing on climate science, we argue that dissent can be epistemically problematic when it leads to a distortion of risk assessment in mainstream science. I'll examine the conditions under which dissent in science is epistemically detrimental, provide empirical support for this finding, and conclude by discussing the normative consequences of these findings in light of Philip Kitcher's "Millian argument against the freedom of inquiry".
Vincenzo Crupi (Turin) gives a talk at the MCMP Colloquium (16 April, 2014) titled "Models of rationality and the psychology of reasoning (From is to ought, and back)". Abstract: Diagnoses of (ir)rationality often arise from the experimental investigation of human reasoning. Relying on joint work with Vittorio Girotto, I will suggest that such diagnoses can be disputed on various grounds, and provide a classification. I will then argue that much fruitful research done with classical experimental paradigms was triggered by normative concerns and yet fostered insight in properly psychological terms. My examples include the selection task, the conjunction fallacy, and so-called pseudodiagnosticity. Conclusion: normative considerations retain a constructive role for the psychology of reasoning, contrary to recent complaints in the literature.
Jason McKenzie Alexander (LSE) gives a talk at the MCMP Colloquium (30 April, 2014) titled "Epistemic Landscapes and Optimal Search". Abstract: In a paper from 2009, Michael Weisberg and Ryan Muldoon argue that there exist epistemic reasons for the division of cognitive labour. In particular, they claim that a heterogeneous population of agents, where people use a variety of socially responsive search rules, proves more capable at exploring an "epistemic landscape" than a homogeneous population. We show, through a combination of analytic and simulation results, that this claim is not true, and identify why Weisberg and Muldoon obtained the results they did. We then show that, in the case of arguably more "realistic" landscapes based on Kauffman's NK-model of "tunably rugged" fitness landscapes, social learning frequently provides no epistemic benefit whatsoever. Although there surely are good epistemic reasons for the division of cognitive labour, we conclude Weisberg and Muldoon did not show that "a polymorphic population of research strategies thus seems to be the optimal way to divide cognitive labor".
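To make the setting concrete, here is a minimal sketch (not the authors' code; parameter values, names, and the search rule are illustrative assumptions) of hill-climbing agents searching a Kauffman-style NK landscape:

```python
import random

N, K = 12, 3          # N sites, each coupled to K other sites
random.seed(0)

# Each site's fitness contribution depends on its own state and on K neighbours.
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
contrib = [dict() for _ in range(N)]

def site_fitness(i, genome):
    key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
    if key not in contrib[i]:
        contrib[i][key] = random.random()   # draw each contribution lazily
    return contrib[i][key]

def fitness(genome):
    return sum(site_fitness(i, genome) for i in range(N)) / N

def hill_climb(genome, steps=200):
    """Individual learning: flip one random bit, keep the flip if fitness improves."""
    best = fitness(genome)
    for _ in range(steps):
        i = random.randrange(N)
        trial = genome[:i] + (1 - genome[i],) + genome[i + 1:]
        f = fitness(trial)
        if f > best:
            genome, best = trial, f
    return best

# A small homogeneous population of purely individual learners.
population = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(20)]
peaks = [hill_climb(g) for g in population]
print("best:", round(max(peaks), 3), "mean:", round(sum(peaks) / len(peaks), 3))
```

Comparing this individual-learning baseline against a variant in which agents also copy the currently best configuration in their network is one simple way to probe whether social learning helps on rugged landscapes.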
Gregory Wheeler (MCMP) gives a talk at the MCMP Colloquium (25 June, 2014) titled "Fast, Frugal and Focused: When less information leads to better decisions". Abstract: People frequently do not abide by the total evidence norm of classical Bayesian rationality but instead use just a few items of information among the many available to them. Gerd Gigerenzer and colleagues have famously shown that decision-making with less information often leads to objectively better outcomes, which raises an intriguing normative question: if we could say precisely under what circumstances this "less is more" effect occurs, we conceivably could say when people should reason the Fast and Frugal way rather than the classical Bayesian way. In this talk I report on results from joint work with Konstantinos Katsikopoulos that resolve a puzzle in the mathematical psychology literature over attempts to explain the conditions responsible for this "less is more" effect. What is more, there is a surprisingly deep connection between the "less is more" effect and coherentist justification. In short, the conditions that are good for coherentism are lousy for single-reason strategies, and vice versa.
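Take The Best, one of the best-known heuristics in the Gigerenzer programme, illustrates what deciding on a single reason looks like. The sketch below is illustrative only (made-up cues and data, not material from the talk):

```python
# Take-The-Best for a paired comparison: cues are assumed to be ordered
# from most to least valid; decide on the first cue that discriminates.

def take_the_best(cues_a, cues_b):
    for ca, cb in zip(cues_a, cues_b):
        if ca != cb:
            return "A" if ca > cb else "B"
    return "guess"   # no cue discriminates

# Example: which city is larger? Hypothetical cues: top-league team,
# state capital, international airport (1 = yes, 0 = no).
city_a = [1, 0, 1]
city_b = [0, 1, 1]
print(take_the_best(city_a, city_b))   # -> "A": decided on the first cue alone
```

The point of the "less is more" literature is that, under the right statistical conditions, such single-reason strategies can match or beat strategies that integrate all available cues.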
Jon Williamson (Kent) gives a talk at the MCMP Colloquium (8 October, 2014) titled "The Principal Principle implies the Principle of Indifference". Abstract: I'll argue that David Lewis' Principal Principle implies a version of the Principle of Indifference. The same is true for similar principles which need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. One might try to avoid this conclusion by disavowing the link between conditional beliefs and conditional probabilities that is almost universally endorsed by Bayesians. I'll explain why this move offers no succour to the subjectivist.
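For reference, a standard statement of Lewis' Principal Principle (a textbook formulation, not quoted from the talk): where Cr is an initial credence function, Ch(A) = x the proposition that the objective chance of A is x, and E any admissible evidence,

\[ Cr\big(A \,\big|\, Ch(A) = x \;\wedge\; E\big) \;=\; x . \]

Glossing the title claim, Williamson's argument is that any chance-credence principle of this kind, relying as it does on admissibility, already commits one to spreading credence evenly over the basic possibilities about which the chances are silent, i.e. to a version of the Principle of Indifference.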
Greg Gandenberger (Pittsburgh) gives a talk at the MCMP Colloquium (22 April, 2015) titled "New Responses to Some Purported Counterexamples to Likelihoodist Principles". Abstract: The Likelihood Principle is important because the frequentist statistical methods that are most commonly used in science violate it, while rival likelihoodist and Bayesian methods do not. It is supported by a variety of arguments, including several proofs from intuitively plausible axioms. It also faces many objections, including several purported counterexamples. In this talk, I provide new responses to four purported counterexamples to the Likelihood Principle and its near-corollary, the Law of Likelihood, that are not adequately addressed in the existing literature. I first respond to examples due to Fitelson and Titelbaum that I argue are adequately addressed by restricting the Law of Likelihood to mutually exclusive hypotheses. I then respond to two counterexamples from the statistical literature. My responses to these latter examples are novel in that they do not appeal to prior probabilities, which is important for attempts to use the Likelihood Principle to provide an argument for Bayesian approaches that does not presuppose the permissibility of using prior probabilities in science.
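For reference (standard formulations, not quoted from the talk): the Law of Likelihood says that evidence E favours hypothesis H1 over H2 exactly when H1 confers the higher probability on E, with the likelihood ratio measuring the degree of favouring:

\[ E \text{ favours } H_1 \text{ over } H_2 \;\iff\; P(E \mid H_1) > P(E \mid H_2), \qquad \text{degree of favouring} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} . \]

The Likelihood Principle, in turn, says that all of the evidential import of E with respect to the hypotheses is carried by this likelihood function, which is what commonly used frequentist procedures (for instance, those whose verdicts depend on the stopping rule) violate.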
Kevin Zollman (CMU) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "The Formation of Epistemic Networks". Abstract: One important area of study for social epistemology is the social structure of epistemic groups -- who communicates their knowledge to whom? Significant research has been done on better and worse communication networks, but less has been done on how a group comes to have one network or another. In this talk, I will present a number of results (some recent) from economics and philosophy about how individuals choose with whom to communicate. Understanding how individuals decide where to gain information can help us to design institutions that lead to epistemically more reliable groups.
Elke Brendel (Bonn) gives a talk at the MCMP Colloquium (17 June, 2015) titled "Disagreement and Epistemic Modality". Abstract: Intuitively, the truth conditions of sentences with epistemic modals, such as "It might be that p", depend on what is epistemically known by a speaker or some contextually relevant group. That is why a contextualist account of epistemic modals seems to provide an adequate semantics for epistemic modal claims. However, contextualism has difficulty accounting for the intuition of disagreement about epistemic modal claims: If A claims (according to his knowledge) "It might be that p" and B claims (according to her knowledge) "It cannot be that p", A and B seem to disagree. They are not merely talking past each other. In my talk, I will first explore the notion of disagreement and present some necessary conditions for two parties to disagree with each other. Second, I will critically examine the prospects of contextualist semantics as well as truth-relativist accounts in dealing with disagreement about epistemic modal claims. I will finally analyze arguments in favor of an invariantist theory of epistemic modality.
Ben Levinstein (Oxford) gives a talk at the MCMP Colloquium (6 May, 2015) titled "A Pragmatic Vindication of Epistemic Utility Theory". Abstract: Traditionally, probabilism and other norms on partial belief have been motivated from a pragmatic point of view. For instance, as Frank Ramsey long ago showed, if you're probabilistically incoherent, then you're subject to a set of bets each of which you consider fair but which are jointly guaranteed to result in a net loss. Since Joyce's seminal 1998 paper, some epistemologists have shifted course and have tried to establish norms on epistemic states without any recourse to practical rationality. I use a theorem from Schervish to bridge the gap between these two approaches. We can either take standard measures of accuracy to be formalizations of purely epistemic value, or we can generate them from what are at base practical foundations. Even if we opt for this latter approach, I show we can mostly cordon off the epistemic from the practical while ultimately grounding epistemic norms in purely practical rationality.
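For reference, the best-known formal measure of accuracy in this literature is the Brier score (given here as a standard example; the abstract does not specify which measures the talk uses): for a credence function c over propositions A with truth values v(A) in {0, 1}, inaccuracy is

\[ B(c, v) \;=\; \sum_{A} \big(c(A) - v(A)\big)^2 . \]

Schervish-style results show, roughly, that strictly proper scoring rules of this kind can be recovered as weighted averages of the practical losses an agent would incur across simple decision problems, which is the kind of bridge between epistemic and pragmatic value the abstract describes.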
Denis Bonnay (Paris Ouest/IHPST) gives a talk at the MCMP Colloquium (30 April, 2015) titled "An Axiomatization of Individual and Social Updates". Abstract: In this talk, I will consider update rules, which an agent may follow in order to update her subjective probabilities and take into account new information she receives. I will consider two different situations in which this may happen: (1) individual updates: when an agent learns that the probability of a particular event has a certain value; (2) social updates: when an agent learns the probability another agent gives to a particular event. Jeffrey conditioning and weighted averaging are two famous update rules, in individual and social situations respectively. I will show that both can be axiomatized by means of one and the same invariance principle, related to Carnap's use of invariance in his work on probabilities.
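The two rules named in the abstract, in their standard forms (the talk's invariance axiomatization is not reproduced here): Jeffrey conditioning shifts the probability of an event E to a learned value q, while weighted averaging mixes the agent's own probability with the probability announced by the other agent:

\[ P_{\text{new}}(A) \;=\; q\,P(A \mid E) + (1-q)\,P(A \mid \neg E), \qquad P_{\text{new}}(A) \;=\; \lambda\,P_{\text{self}}(A) + (1-\lambda)\,P_{\text{other}}(A), \quad 0 \le \lambda \le 1 . \]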
Branden Fitelson (Rutgers) gives a talk at the MCMP Colloquium (16 July, 2014) titled "Coherence".
Scott Page (Michigan) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "Collective Accuracy: Agent Based & Emergent vs Statistical and Assumed". Abstract: In this talk, I describe two broad classes of models that can explain collective accuracy, more commonly referred to as the wisdom of crowds. The first model is based on statistical, law-of-large-numbers logic: accuracy emerges from the cancellation of random errors. The second model has roots in computer science and psychology. It assumes that predictions come from models, and different predictions arise because people use different models. I then describe how, in agent-based models, the amount of model diversity, and therefore the accuracy of the collective, emerges. It is possible to write difference equations that explain average diversity levels. The talk will summarize papers written with Lu Hong, Maria Riolo, PJ Lamberson, and Evan Economo.
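One standard way to make the error-cancellation logic precise (an algebraic identity from this line of work, though not necessarily the equations used in the talk): if n agents predict a quantity \( \theta \) with predictions s_i and crowd average \( \bar{s} \), then

\[ (\bar{s} - \theta)^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}(s_i - \theta)^2 \;-\; \frac{1}{n}\sum_{i=1}^{n}(s_i - \bar{s})^2 , \]

so collective squared error equals average individual error minus prediction diversity: a crowd is accurate either because its members are individually accurate or because their errors are diverse and cancel.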
Michael Weisberg (Pennsylvania) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "Agent-based Models and Confirmation Theory". Abstract: Is it possible to develop a confirmation theory for agent-based models? There are good reasons to be skeptical: Classical confirmation theory explains how empirical evidence bears on the truth of hypotheses and theories, while agent-based models are almost always idealized and hence known to be false. Moreover, classical ideas about confirmation have been developed for relatively simple hypotheses, while even the simplest agent-based models have thousands of variables. Nevertheless, we can draw on ideas from confirmation theory in order to develop an account of agent-based model confirmation. Theorists can confirm hypotheses about model/world relations, and they can also use a variety of techniques to investigate the reliability of model results. This paper is an exploration of these possibilities.
Aidan Lyon (Maryland, MCMP) gives a talk at the MCMP Colloquium (13 November, 2014) titled "I Believe I don't Believe. (And So Can You!)". Abstract: Contemporary epistemology offers us two very different accounts of our epistemic lives. According to Traditional epistemologists, the decisions that we make are motivated by our desires and guided by our beliefs, and these beliefs and desires all come in an all-or-nothing form. In contrast, many Bayesian epistemologists say that these beliefs and desires come in degrees and that they should be understood as subjective probabilities and utilities. What are we to make of these different epistemologies? Are the Traditionalists and the Bayesians in disagreement, or are their views compatible with each other? Some Bayesians have challenged the Traditionalists: Bayesian epistemology is more powerful and more general than the Traditional theory, and so we should abandon the notion of all-or-nothing belief as something worthy of philosophical analysis. The Traditionalists have responded to this challenge in various ways. I shall argue that these responses are inadequate and that the challenge lives on.
Alvin I. Goldman (Rutgers) meets Stephan Hartmann (MCMP/LMU) in a joint session on "Bridging the Gap between Informal and Formal Social Epistemology" at the MCMP workshop "Bridges 2014" (2 and 3 Sept, 2014, German House, New York City). The two-day trans-continental meeting in mathematical philosophy focused on inter-theoretical relations, thereby connecting the form and content of this philosophical exchange. Idea and motivation: We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are any, what are inter-theoretical relations to look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Find out more about the meeting at www.lmu.de/bridges2014.