Atheism à la Ricoeur
Majid Amini, Virginia State University
Traditionally atheism has been conceived and construed as a rejection of religion and religious faith. However, in his Bampton Lectures at Columbia University under the title of “Religion, Atheism, and Faith,” Paul Ricoeur offers an interpretation of atheism somewhat reminiscent of the Hegelian dialectical approach whereby atheism as an antithesis to religion paves the way for a paradoxical synthesis: namely, a new articulation of faith. The purpose of this paper is therefore to appraise Ricoeur’s rendition of the religious significance of atheism against his problematic concepts of religion and atheism.
Utopianism and Political Irrationality
Aaron Ancell, Duke University
People tend to be biased and irrational about politics, yet many normative political theories presuppose or require that people’s political views are responsive to reasons and evidence in rational and unbiased ways. Are such theories utopian in the pejorative sense? David Estlund argues that the answer is “no” because the fact that the normative presuppositions or requirements of a theory are unlikely to be met does not entail that the theory is utopian. I argue that Estlund’s argument is effective only if being rational and unbiased about politics is something people could easily do but are nonetheless unlikely to do. His argument falters insofar as the reason that people are unlikely to be rational and unbiased about politics is that being rational and unbiased about politics is very difficult. Moreover, I argue that Estlund’s own defense of democracy commits him to eschewing normative requirements that are very difficult to meet.
Spatiotemporal Betterness and Pareto in Infinite Worlds
Amanda Askell, New York University
Infinite worlds are worlds that contain infinitely many agents across spacetime. Such worlds are known to create problems for aggregative ethical theories, because two infinite worlds can have the same total well-being but very different distributions of well-being. For example, a world that contains agents that are each at some positive level of well-being 1, 1, 1, 1, 1,... will have the same total well-being as a world that contains the same population of agents, but every second agent is at some neutral level of well-being: 1, 0, 1, 0, 1, 0,... Some philosophers, notably Arntzenius (2014) and Vallentyne and Kagan (1997), have argued that aggregative ethical theories can appeal to the distribution of agents across spacetime to solve some of these difficulties. In this paper, I argue that both unrestricted and restricted versions of these spatiotemporal solutions conflict with Pareto: the principle that says that if some agents have more well-being at world w1 than at w2 and no agents have lower well-being at w1 than at w2, then w1 is better than w2. I conclude that this gives us strong reasons to reject spatiotemporal betterness principles in infinite ethics.
Worms, Stages, and Names
Harriet Baber, University of San Diego
The stage view, according to which ordinary objects are momentary stages, requires a semantics for expressions that purport to refer to ordinary objects and sentences ascribing properties—including historical and lingering properties—to them. If ordinary objects are stages, which stages do we talk about? On Sider’s original account, we talk about current stages. When none are available, we do not talk about them: sentences that appear to be about individuals who have no current stages, he holds, are de dicto, since the selection of any of their stages to secure reference would be arbitrary. I suggest that instead of looking for non-arbitrarily distinguished stages to secure reference, we embrace indecision. On the supervaluationist account sketched here, names refer indeterminately over reference classes they select. This account treats the living and the dead even-handedly: we talk about Socrates in the same way that we talk about our contemporaries.
Aristotle on Truth and Practical Truth: Nicomachean Ethics VI 2
Samuel Baker, University of South Alabama
Scholars have in general found it difficult to make sense of Nicomachean Ethics (NE) VI 2, and in particular its notion of “practical truth” (1139a26-27). In this paper I aim to make progress by means of an unexplored route. I argue that the text of NE VI 2 is not arranged as Aristotle intended it, and I propose—following Gauthier and Jolif—that lines 1139a31-b11 should be placed directly after line 1139a20. Having translated the rearranged text and explained its advantages, I advance an interpretation of Aristotle’s views of truth and practical truth by way of commentary. It is key to my interpretation that truth really is the ergon of thought, and that when Aristotle identifies practical truth as “truth agreeing with correct desire” this correct desire must be powerful enough so as to issue in action. Thus, the akratic does not attain “practical truth.”
David Barack, Columbia University
Cognitive neuroscientists increasingly explain cognitive phenomena by decomposing them into functions that are performed in virtue of the dynamics of neural parts, yielding a componential dynamicism. This componential dynamicism is a species of homuncularism in the philosophy of mind, which analyzes cognitive functions as the results of the states and activities of the parts of cognitive systems. Holistic and piecewise decompositions are two types of homuncular decomposition. The dynamics of neural systems constitute the basic elements of piecewise decompositions, the ground level of a homuncular decomposition, and hence constitute a dynamical homuncularism. Classic views of homuncularism maintain that homuncular analyses proceed by identifying parts that pass meaningful messages. Homuncular dynamicism, however, rejects this thesis of processor separability, instead endorsing the thesis of processor confluence. By executing transformations over the same signaling stream, homuncular dynamicism presents a novel view of how cognitive functions are composed out of the brain’s dynamical processes.
Rima Basu, University of Southern California
A consequence of living in a society shaped by racist attitudes and institutions is that the evidentially rational agent—the one who believes on the basis of the strength of their evidence—may be forced to believe in accordance with statistical evidence that suggests associations they reflectively reject, e.g., an association between black and criminality. Intuitively, such beliefs are our paradigm of morally bad beliefs. But, if racist beliefs were to reflect reality and thus be rationally justified, it seems that the moral ought and the epistemic ought would conflict. I argue that we can explain away this conflict with tools from the broad framework of pragmatic encroachment. Starting with the intuitive thought that in some cases—namely, high stakes cases—there are higher standards of evidence we must meet for our beliefs to count as justified, I argue that this framework can be extended to capture an additional way in which high stakes can be triggered—a failure to appreciate the risks you put on other people when forming beliefs on the basis of statistical evidence, e.g., believing that a black resident has an open arrest warrant on the basis that 92 percent of black residents have open arrest warrants. This insensitivity to the moral stakes of a situation, like insensitivity to the practical stakes, can make it harder to know and be justified. In turn, the question of what individuals should believe is, in part, a moral one. The moral agent not only does better, they believe better as well.
Two Strawsonian Strategies for Accounting for Morally Responsible Agency
David Beglin, University of California, Riverside
What capacities are required for someone to be a morally responsible agent, the type of agent it is, in principle, appropriate to hold responsible for her conduct or attitudes? For many theorists, inspired by P. F. Strawson, it seems natural to answer this question only by first considering what goes into holding someone responsible. The predominant way theorists have developed this Strawsonian approach has been in terms of the attitudes—“reactive attitudes”—involved in holding someone responsible. For these theorists, what’s required for morally responsible agency depends in some way on the nature of these attitudes. I argue that this approach faces certain challenges, which ultimately stem from the relative superficiality of the reactive attitudes on which it focuses. I proffer an alternative Strawsonian approach, which avoids these challenges precisely by focusing on something more fundamental than the reactive attitudes: the concern that gives rise to them in the first place.
Laser Lights and Designer Drugs: The New Face of Ruthlessly Reductive Neuroscience
John Bickle, Mississippi State University
I introduce philosophers to two new experimental tools in neurobiology: optogenetics and DREADDs (Designer Receptors Exclusively Activated by Designer Drugs). These tools permit unprecedented control over activity in specific neurons in behaving animals. These new tools fill an important methodological gap in the published experimental literature from molecular and cellular neuroscience claiming causal mechanisms for cognitive functions. Two recent approaches in philosophy of neuroscience account for the place of such experiments within the broader discipline, “new mechanism” and “ruthless reductionism.” Published experimental results from increasing use of optogenetics and DREADDs better fit the ruthless reductionist’s direct “mind-to-cells and molecules” linkages picture than the new mechanist’s “nested hierarchies of mechanisms within mechanisms” picture, despite the latter’s popularity across the philosophy of neuroscience.
Grounding, Physicalism, and the Explanatory Gap
Zach Blaesi, University of Texas at Austin
Recently, a number of philosophers have suggested that the notion of ground is needed to successfully formulate physicalism. For there are reasons to think that physicalism understood as a grounding thesis (call it Grounding Physicalism) has advantages over the traditional options. The appeal of Grounding Physicalism is that it promises to occupy a middle position between reductive and non-reductive versions of physicalism. My primary aim is to undermine the enthusiasm for Grounding Physicalism by putting a new spin on a common objection to physicalism: that it leaves an “explanatory gap.” This problem has been heavily discussed, but usually with the assumption that physicalism is an identity thesis. By contrast, I will take Grounding Physicalism as my target and argue that it leaves an explanatory gap—one that cannot be addressed in the usual way. As a preview, suppose that some experience of pain is grounded in a neural event. We can now ask, first, why that experience of pain is grounded in a neural event and, second, why experiences of pain are generally grounded in neural events. I will argue that both questions are to be answered by appealing to “essential truths” about the grounded. The problem is that there just doesn’t seem to be anything about the nature of pain that requires it to be grounded in a neural event or indeed any physical event whatsoever. To bridge this gap, we need a better understanding of the nature of pain. One way to do this is to provide a “real definition” of pain. Yet, if we succeed, we end up with a version of physicalism that has no advantage over traditional reductionism. I conclude that the Grounding Physicalist faces a dilemma: either leave an explanatory gap or abandon the considerations that motivated Grounding Physicalism.
The Evil of Refraining to Save: Liu on the Doctrine of Doing and Allowing
Jacob Blair, California State University, East Bay
In a recent article, Xiaofei Liu seeks to defend, from the standpoint of consequentialism, the Doctrine of Doing and Allowing (DDA). While there are various conceptions of DDA, Liu understands it as the view that it is more difficult to justify doing harm than allowing harm. Liu argues that a typical harm doing involves the production of one more evil and one less good than a typical harm allowing. Thus, prima facie, it takes a greater amount of good to justify doing a certain harm than it does to justify allowing that same harm. In this reply, I argue that Liu fails to show, from within a consequentialist framework, that there is an asymmetry between the evils produced by doing and allowing harm. I conclude with some brief remarks on what may establish such an asymmetry.
The Problem of Unwelcome Epistemic Company
Joshua Blanchard, University of North Carolina at Chapel Hill
Many of us are utterly unmoved when it is pointed out that some morally or intellectually suspect source agrees with our point of view. But while we may tend to find this kind of guilt by epistemic association unproblematic, I argue that this tendency is a mistake. In such cases, we face what I call the problem of unwelcome epistemic company. This is the problem of encountering agreement about the content of your belief from a source whose faults give you reason to worry about the belief’s truth, normative status, etiology, or implications. On the basis of an array of cases, I elaborate on four distinct kinds of problems that unwelcome epistemic company might pose. Two of these problems are distinctly epistemic, and two are distinctly moral. I then canvass some possible responses to the problem, ranging from unmovable stubbornness to an epistemic prudishness that avoids unwelcome company at all costs. Finally, I offer some preliminary lessons of this problem and distinguish it from its close relative, the problem of peer disagreement.
What Extrinsic Dispositions Tell Us about What Dispositions Are
David Blanks, Texas A&M University
A castle that is located on a hill is more likely to withstand an attack than an intrinsic duplicate located at the bottom of a valley, since having the high ground provides a defensive advantage. The castle on the hill, then, has the disposition of invulnerability while the castle in the valley does not. This is an example of an extrinsic disposition. Not all accounts of what dispositions are are able to accommodate extrinsic dispositions. According to counterfactualism, a disposition just is a counterfactual property. The standard accounts have it that causal bases are essential to what dispositions are. A disposition just is a causal basis according to the identity view, and a disposition is the property of having a causal basis according to causal functionalism. I argue in this paper that extrinsic dispositions give us a reason to prefer counterfactualism over the standard accounts.
Hobbes on Hope and Deliberation
Christopher Bobier, University of California, Irvine
Philosophers have been interested in Hobbes’s account of deliberation as a series of alternating appetites and aversions ever since his Elements of Law was circulated in 1640. Yet the role of hope in deliberation has been largely overlooked, which is surprising since he writes that “Deliberation therefore requireth in the action deliberated two conditions: one, that it be future; the other, that there be hope of doing it, or possibility of not doing it.” The aim of this paper is to argue that hope plays a uniquely important role in Hobbesian deliberation. This paper therefore highlights the neglected philosophical importance of hope in Hobbes’s thought. I proceed as follows. In section II, I present an overview of Hobbes’s account of deliberation. Then, in section III, I argue that Hobbesian hope is uniquely necessary for motivating and sustaining deliberation.
Proper Names and “That”-Clauses: A Dilemma for Millians
Paolo Bonardi, University of California, Los Angeles
Millianism is the doctrine according to which the semantic content of a proper name is exhausted by its referent. The present paper raises and attempts to solve a dilemma for Millians: either a proper name of a truth bearer is in turn a truth bearer (which seems inadmissible); or having a truth bearer as semantic content is not sufficient for a linguistic expression to be a truth bearer (but then what is required for such a purpose?). As it will be shown in the paper, the dilemma does not arise with “that”-clauses in the place of proper names, provided that it is denied both that “that”-clauses are Millian designators and that their semantic content is a truth bearer.
Harming as Difference-Making
Thomas Bontly, University of Connecticut
This paper puts forward an account of harming and benefiting in terms of counterfactual difference-making. Several different views about the nature of harm draw inspiration from the difference-making idea. According to the widely held counterfactual comparative account, for instance, an act (or other event) harms someone if and only if the act leaves that someone worse off than she would otherwise have been—i.e., worse off than she would have been had that act not been performed. However, the counterfactual comparative account is subject to numerous difficulties, including the problems of preemptive harms, harmful benefits, and the non-identity problem. The underlying problem with the counterfactual comparative account, I argue, is that it relies on a naïve and undifferentiated notion of counterfactual difference-making. One thing we learn from recent work on causation and explanation, however, is that there are a number of different ways of making a counterfactual difference. By attending to these distinctions, we can frame a counterfactual difference-making account of harm that solves the aforementioned problems (among others) and is generally fit for service in moral theory.
Intelligibility and the Guise of the Good
Paul Boswell, Université de Montréal
The Guise of the Good (GG) holds an agent only does for a reason what she sees as good in some way. There are two main versions of the theory. According to the attitudinal version, desires have a presenting-as-good character but the good need not figure in desires' contents. According to the rival assertoric version, desires are perception-like representations with normative content. In this paper I present a dilemma for the attitudinal theorist who relies upon GG's ability to account for the intelligibility of action for a reason. I show that the very property GG theories need to answer an objection from Kieran Setiya and Michael Stocker forces them to characterize their view in a way that either favors the assertoric model or cannot explain the intelligibility of action. The upshot is that GG theorists should move to assertoric formulations of the view.
Living and Dying in Four Dimensions
Andrew Brenner, University of Notre Dame
According to four-dimensionalism we persist through time by being temporally extended, and having different temporal parts at each time at which we exist. Eternalism is the view that all times are equally real. In this paper I explore the implications of four-dimensionalist eternalism for how we should think about the evil of death—e.g., whether death is an evil for the one who dies, and why it is an evil—both under the assumption that there isn’t an afterlife and under the assumption that there is an afterlife. Given the assumption that there isn’t an afterlife, it seems as if you should not fear death. You might fear life, however. I impose two constraints on afterlife scenarios. Many widely accepted afterlife scenarios are, subject to those constraints, incompatible with four-dimensionalist eternalism.
Visual and Motor Imagery
Tyler Brooke-Wilson, Massachusetts Institute of Technology
Peter Carruthers defends a view of the mind on which open-ended, creative and conscious thought is made possible by mental imagery. For Carruthers, episodes of mental imagery occur when we attend to sensory models of our intended actions, called “forward models,” while overt action is inhibited. Forward models are the predictions our sensory systems make in light of motor commands to anticipate the consequences of our own behavior. In this paper, I argue that, pace Carruthers, mental imagery is not a single process. Rather, there are at least two, and possibly many more, mental imageries. Drawing on literatures in psychology and neuroscience, I offer arguments to the effect that motor imagery and visual imagery are distinct processes. Carruthers’ attended forward model offers, at most, a plausible account of one of them. This leaves the best studied of mental imageries, visual imagery, without a plausible mechanism.
From Nature to Second Nature: The Evolution of Bergson’s Conception of Habit
Olivia Brown, Katholieke Universiteit Leuven
Henri Bergson is one of the few thinkers in the history of philosophy who both explicitly and extensively discusses the phenomenon of habit. In view of his long engagement with habit, does he develop a philosophically robust account of the phenomenon? Most scholarship on Bergson’s conception of habit refers to his early work, Matter and Memory. As I argue in the first part of this paper, Bergson’s account of habit in this text is problematic because he does not adequately differentiate between habit and matter. He also discusses habit in the first part of his last major work, The Two Sources of Morality and Religion. Although Bergson’s account of habit in The Two Sources has been largely overlooked in secondary literature, as I argue in the second part of my paper, his concept of habit as our social nature brings a new aspect of the phenomenon to light, making an original contribution to the philosophy of habit: rather than a tendency that is hard to resist, habit is primarily a resistance to which we tend to give in.
How Should We Think about Perceptual Presence?
Alessandra Buccella, University of Pittsburgh
In perceptual experience, ordinary, three-dimensional objects like tomatoes, cats, computers, etc. seem to be present to our consciousness. In this paper, I will examine two accounts of perceptual presence. The first one is presented and defended by Alva Noë. Noë contends that perceptual presence of ordinary things is “virtual,” that is, it is a sort of “presence-in-absence” granted by the cooperation of two other elements: appearances (or looks), and what he calls “sensorimotor knowledge.” The second account comes out of some ideas by Merleau-Ponty (1945). He argues that ordinary objects are present to consciousness as the normative background that constitutes part of our perceptual phenomenology. Objects, according to Merleau-Ponty, transcend every possible appearance: they are “seen from everywhere” (p. 71). After pointing out some problems for Noë’s virtual presence account, I will conclude that Merleau-Ponty’s solution is to be preferred, despite its unconventionality.
Is Nihilism Self-Defeating?
Spencer Case, University of Colorado Boulder
Total normative nihilism is the view that there are no normative reasons of any kind – including epistemic reasons. This view has been criticized as self-defeating because those who accept it seem to be committed to saying: “Nihilism is true, but I have no epistemic reason to believe that.” One strategy for defending nihilism is to disambiguate between two different senses of “reason.” The nihilist, allegedly, can coherently say that he has epistemic reasons to accept his own arguments for nihilism—if all this means is that they are justified according to the accepted epistemic standards—while denying that these standards are normatively authoritative. In what follows, I consider three ways this response might be cashed out and argue that none can overcome the self-defeat charge.
Humanity as an End in Itself: Respect for Humanity Refers to Respect for Personality
Bowen Chan, University of Toronto
The traditional interpretations of Kant’s account of humanity as an end in itself mischaracterize humanity as a general capacity to choose ends, and hence cannot adequately account for the dignity of humanity. As an alternative, I argue that, for Kant, humanity, at least in the Groundwork, should not be characterized as a general capacity to choose ends, but as a capacity to choose ends freely in a strict sense, and hence to choose ends from respect for the moral law. Humanity, therefore, should be understood to refer to personality, and hence personality should be the target of respect as an end in itself.
Lockean Responses to the Problem of Perceptual Error
Ronald Claypool, University of Florida
Locke says both that our simple ideas are all real, adequate, and true and that our idea of figure is a simple idea. As Antonia LoLordo has observed, this seems to leave Locke with a serious problem with perceptual error. On the one hand, we seem to be susceptible to perceptual error, as for instance when a distant square tower appears round to us, but on the other Locke’s position seems to be that such a round idea must be real, adequate, and true, because it is simple. I consider the situation and what Locke says in the Essay and argue that Locke does not actually have a problem with perceptual error at all. Ideas of particular shapes, rather than the idea of having shape, should not be understood to be simple ideas at all, and the distinction between imagistic and non-imagistic ideas eliminates any problem of perceptual error.
Cognitivism, Motivation, and Dual-Process Approaches to Normative Judgment
Brendan Cline, University at Buffalo
Expressivists argue that the best explanation for the intimate relationship between normative judgment and motivation is that normative judgments are noncognitive, desire-like states. Normative statements are then construed as expressions of these noncognitive states. In this paper, I draw on dual-process models in cognitive psychology to respond to this argument. According to my proposal, normative judgments are ordinary beliefs that are typically produced by two kinds of process: intuitive-affective processes and domain-general reasoning. When produced by the first kind of process, motivation and judgment tend to align. When produced by the second kind, motivation and judgment might not align. Since the first kind of process is the most common pathway of normative belief formation, normative judgments are typically accompanied by aligning motivation. This proposal enables cognitivists to explain the intimate link between normative judgment and motivation, thereby removing the major obstacle to interpreting normative statements truth-conditionally.
On the Reasonability of Deep Reasons
Marilie Coetsee, Rutgers University
Political liberals deny that there is any political obligation to offer Unreasonable citizens justifications for legislation that they can, given their Unreasonable values, accept. Some political liberals also deny something further—namely, that there is any political obligation to offer Unreasonable citizens “Deep” comprehensive doctrine-based reasons for why they should reject their Unreasonable values and adopt more Reasonable ones instead. In this paper I argue that Reasonability itself upends the latter denial—that Reasonability requires us to offer Unreasonable citizens Deep Reasons to be Reasonable. This requirement is rooted in the fact that Reasonability compels us to show regard for Unreasonable persons’ status as free and equal, and that showing regard for that status involves showing regard for their capacity for a sense of justice. The failure to offer Deep Reasons, I claim, shows disregard for that capacity, and so also, by virtue of that, for their freedom and equality.
Mechanist Explanation and the Characterization of Phenomena
David Colaco, University of Pittsburgh
In their recent book, Carl Craver and Lindley Darden provide an account of the characterization of scientific phenomena in their framework for the discovery of mechanisms. While this account makes salient the need to provide an analysis of this descriptive scientific practice, their account of characterization fails to correctly reflect the character of the identification process. This is because their account does not correctly address the evaluative components of the process, and does not identify the scientific strategies employed in the process. I unpack and discuss the implications of the account, and criticize it concretely with appeal to historical research on the phenomenon of long-term potentiation (LTP).
Something Stinks: Smell and the Problem of Secondary Qualities
Jack Collins, Mercy College
Most philosophical discourse on secondary qualities defers to the example of color, while often neglecting the causes of non-visual sensation. Such visually-oriented accounts of secondary qualities—whether as mental states, instantiated universals, tropes, or causal powers—fail to do justice to the sense of smell, both in terms of its mechanism and the qualitative experience involved. Any attempts to understand smell as a quality supervening on a specific external object require ignoring both our scientific understanding of olfaction and our everyday experience of smells and smelly things. This paper suggests a naturalized account of the sense of smell (influenced by, but critical of, the Quinean “dispositional” view) that describes how sensation happens as a complex of entities, processes, and relations. This approach respects our scientific understanding and minimizes ontological commitments, yet still preserves our common-sense understanding of qualification of the objects of experience.
Locke on the Difficulty of Demonstration
Patrick Connolly, Iowa State University
Locke famously claimed that morality was capable of demonstration. But he also refused to provide a system of demonstrative morality. This paper addresses the mismatch between Locke’s stated views and his actual philosophical practice. While Locke’s claims about demonstrative morality have received a lot of attention it is rare to see them discussed in the context of his general theory of demonstration and his specific discussions of demonstration. This paper explores Locke’s general remarks about demonstration as well as his claims about demonstration in natural philosophy, mathematics, and morality. Careful attention to these detailed discussions motivates a reevaluation of Locke’s views on demonstrative knowledge of morality. Specifically, while Locke did believe that some demonstrative moral knowledge might be in-principle available to us he also believed that facts about the difficulty of demonstration meant that this knowledge would in-practice be largely unattainable.
Interpretivism and Norms
Devin Curry, University of Pennsylvania
I will begin this talk by arguing that Donald Davidson held belief to be constitutively normative. Timothy Schroeder has recently—rightly—interpreted Davidson’s rationality norm of belief to lack genuine normative force. Davidsonian beliefs are not constituted by (normatively forceful) norms of belief. But they are constituted via norms of interpretation, which derive genuine normative force from the social practice of triangulation. I will go on to challenge how interpretivists have traditionally construed the relationship between belief and norms of interpretation. Recent work by philosophers and psychologists has revealed that human practices of belief attribution are governed by a rich diversity of normative standards. In light of this research, I will argue that interpretivists face a dilemma: either give up on the idea that belief is constitutively normative or countenance a context-sensitive disjunction of norms that (partly) constitute belief. Either way, interpretivists should embrace the intersubjective indeterminacy of belief.
Indeterminate Perception and Color Relationism
Brian Cutter, University of Notre Dame
One of the most important objections to sense data theory comes from the phenomenon of indeterminate perception. For example, an object seen in the periphery of one’s visual field might look red without looking to have any determinate shade of red. Since sense data are supposed to have precisely the properties that sensibly appear to us, sense data theory evidently has the absurd consequence that a sense datum can have a determinable property without having any of its determinates. In this paper, I show that a parallel objection applies to standard forms of color relationism. In light of the phenomenon of indeterminate perception, the color relationist must either reject intuitively obvious facts about the determinate-determinable structure of color space (e.g., that red is a determinable) or reject the plausible and widely accepted principle that nothing can have a determinable without having one of its determinates.
Spinoza on the Being-Thing Distinction
Stephen Daniel, Texas A&M University
For Spinoza substance and attributes are beings (entia), not things (res). This distinction, which has its origin in Avicenna and is developed by Aquinas and Suárez, reframes the question not only of what Spinoza means by saying that substance has infinite attributes but also of whether his doctrine of attributes should be cast in subjectivist or objectivist terms. I argue that Spinoza’s letters and Short Treatise are important for understanding how to interpret his doctrine of attributes.
What Must We Know to Benefit from Aristotle’s Ethics?
Carlo DaVia, Fordham University
Although Aristotle explicitly states that grasping “the that” is a prerequisite for benefiting from his ethics, there is curiously little scholarly consensus as to what “the that” in ethics refers to. In this paper I consequently raise anew two exegetical questions: (i) according to Aristotle, what is “the that” in ethics?; and (ii) what role does “the that” play in Aristotle’s ethical works? In order to answer these questions, I argue that, despite having a different goal and subject matter, Aristotle intends “the that” in ethics to possess the same meaning, mutatis mutandis, as “the that” in demonstrative science, namely: it is a true judgment asserting either that some subject S exists, or that some attribute P necessarily belongs to subject S. Given this understanding of “the that” in ethics, I argue that there are two respects in which grasping the causal explanations of ethical facts can make us better. First, doing so clarifies our practical aims and thereby helps us achieve them. Second, grasping “the why” not only clarifies our practical aims, but also dispels confusions that prevent us from successfully achieving them.
Testimonial Injustice in Philosophical Discourse
Emmalon Davis, Indiana University Bloomington
In this paper, I first argue that there are two forms of testimonial injustice: (1) identity-based testimonial injustice, the phenomenon elucidated by Miranda Fricker in 2007, according to which a speaker’s credibility is assessed in prejudicially deficient ways in virtue of the speaker’s social identity (that is, her gender, race, etc.) and (2) content-based testimonial injustice, a parallel phenomenon in which a speaker’s credibility is judged in prejudicially deficient ways in virtue of the kind of information the speaker attempts to convey (that is, in virtue of gendered or racialized elements of that testimony). Second, I suggest that both identity-based and content-based testimonial injustice are prevalent in philosophical discourse. Finally, I argue that the prevalence of these two forms of testimonial injustice can, in part, explain the under-representation of women and people of color in philosophy, as well as the under-representation of feminist philosophers and philosophers of race in the discipline.
Plural Slot Theory
T. Scott Dixon, Ashoka University
Cody Gilmore (2013) argues for slot theory, the view that a property or relation is n-adic if and only if there are exactly n slots in it, and that each slot may be occupied by at most one entity in any completion of that property or relation. This view bears the full brunt of Fine's (2000) symmetric completions and conflicting adicities problems. I develop an alternative, plural slot theory (or pocket theory), which avoids these problems, key elements of which have been considered by Yi (1999), McKay (2006), and Gilmore himself (2013). Like the slot theorist, the pocket theorist posits entities (pockets) in properties and relations that can be occupied. But unlike the slot theorist, the pocket theorist denies that at most one entity can occupy any one of them in any completion of a property or relation. As a result, she must also deny that the adicity of a property or relation is equal to the number of occupiable entities in it. By abandoning these theses, however, the pocket theorist is able to avoid Fine's problems, resulting in a stronger theory about the internal structure of properties and relations.
Laws of Nature, Prediction, and Reductionism
Christopher Dorst, University of North Carolina at Chapel Hill
In this paper I argue that the laws of nature are predictively useful. In a sense, such a claim hardly needs defending. It is well known that scientists, engineers, meteorologists, and others routinely use the laws of nature to make predictions about a great variety of phenomena. But I think the laws' predictive utility has been underappreciated in the current philosophical literature on laws. Here I argue that there are a number of conspicuous features possessed by actual putative laws of nature that lend them their predictive utility. I then argue that the presence of these features is difficult to account for on a variety of nonreductive theories of laws. Conversely, I suggest that reductive theories of laws have a much easier way of explaining the laws' predictive utility. I take this to be a general consideration in favor of reductive theories.
Two Cheers for Akrasia
Kevin Dorst, Massachusetts Institute of Technology
An intuitive “bridging” principle between first- and higher-order attitudes is Enkrasia: your confidence in p should line up with your estimate of how confident you should be in p. It has been increasingly recognized that Enkrasia is equivalent to Access Internalism—the claim that you’re always in a position to tell how confident you should be in a given proposition. After explaining this connection, I provide two independent arguments that Access Internalism (and hence Enkrasia) is false. That is, sometimes you should be uncertain of what you should think—higher-order uncertainty can be rational. My first argument is based on the fact that sometimes peer disagreement can force us to revise our credences. My second is based on the fact that sometimes a body of information is accessible but not surveyable: each part of it is available to us even though the body as a whole is not. Upshot: Enkrasia is false. However, I suggest this conclusion may pave the way to endorsing weaker bridging principles between first- and higher-order attitudes.
Ioan Dragos, University of Toronto
Epistemic extendedness is the increasingly popular idea that knowledge can sometimes be attributed to an individual subject even when she cannot alone satisfy the normative conditions on knowledge. I claim, first, that a fundamental difference between knowledge-that and knowledge-how is that the former can be epistemically extended and the latter cannot. Furthermore, since there are cases of knowledge-how for which no individual can alone satisfy the normative conditions on knowledge, knowledge-how must sometimes be attributed to groups. That is, group know-how sometimes obtains. A corollary of this argument is that anti-intellectualism about the relationship between knowledge-that and knowledge-how must be correct.
John Dyck, The Graduate Center, CUNY
Everyone agrees that musical works have certain constitutive elements: tone, harmony, rhythm, etc. I argue that space can also be an element of musical works: some works are constituted by distance and movement of sound sources. I begin by describing works of spatial music. I argue that spatial music is a common phenomenon, since much music is latently spatial. I then consider arguments against the possibility of spatial music. The most substantial of these arguments, from Julian Dodd, relies upon a conception of music as internalist and purely phenomenological. I argue against the internalist conception. Since music isn’t just in the head, it can be spatial. I then argue that musical Platonism, a prominent account of musical ontology, is committed to the internalist conception. This is a problem for the view.
Do Visual Hallucinations Involve Perception?
Rami El Ali, Lebanese American University
Accepting visual hallucinatory perception is accepting that all visual hallucinations constitutively involve visual perception (henceforth I omit “visual”). Because contemporary views reject hallucinatory perception, they deny that perception can be fundamental to all visual experience. Instead, experiences are construed as fundamentally representational, or fundamentally disjunctive. But rejecting hallucinatory perception is a significant conclusion given the impact of misperception on theories of experience. I argue that one central argument for denying the constitutive role of perception in hallucination, focusing on the existence of “hard” hallucinations, fails to motivate a nonperceptual view of hallucinations.
Vagueness and Pessimism about Climate Rationality
Luke Elson, University of Reading
I consider the challenge of climate change from the perspective of rational choice, rather than climate justice. Along with Chrisoula Andreou, I note the similarities of carbon emissions to Warren Quinn’s “Puzzle of the Self-Torturer.” But pace Andreou and Quinn, I argue that this engenders a paradox of the sorites, and that to avoid environmental disaster we must arbitrarily choose an indeterminately-optimum level of emissions and environmental damage. I end on a pessimistic note, and argue that no government (and certainly no open, democratic government) is likely to be able to do this whilst being honest with its citizens about the arbitrariness involved.
Thinning the Veil: Mills, Rawls, and Identity
Victoria Emery, Fordham University
Charles W. Mills calls his readers to recognize the racist and colonial foundations on which our current liberal state is built, claiming that there is an unacknowledged contract—“the racial contract”—that keeps in place a white supremacist regime. In this paper I take up a proposal from Mills to re-think John Rawls’ veil of ignorance thought experiment. I critique Mills via a narrative conception of identity. I aim to demonstrate the impossibility of coherently isolating individual features of identity, highlighting their contingent nature. This analysis has implications for a broader conversation about identity politics.
Origins of Attention to Objects and Mental Oil
Emma Esmaili, University of British Columbia
This paper challenges two widespread assumptions about the nature of visual object perception and attention, particularly in early development, which are foundationally assumed by both sides in various longstanding debates, such as the transparency debate in philosophy and empiricist/nativist debates in the sciences: (a) Basic feature perception and attention scaffold object perception, and are continuous through the life-span. (b) Feature-detection is characterized via an objectivity/subjectivity divide: feature-detection either involves objective states accessed by external attention, or may also involve subjective states accessible by internal attention; further, in early development, feature-detection at first involves only one of these states and its corresponding form of attention. Philosophical positions adopting these assumptions were recently supported by empirical studies of attention. Yet, I argue, new theories of attentional development obviate both assumptions, debilitating traditional positions. However, I further argue, the transparency debate’s notion of “mental oil” helps reconceive of the foundations of object processing in accordance with new theories.
Tipsy Sex: When Sex Under the Influence Becomes Morally Problematic
Andreas Falke, University of Florida
Due to shockingly high sexual assault rates, both the media and ethicists tend to focus on sexual violence rather than sexual encounters in general when discussing sex under the influence. Cases involving alcohol but no sexual violence have received relatively little attention. This is unfortunate for several reasons: a) morally problematic non-violent cases of sex under the influence are more common than date rape and other forms of sexual violence, b) there is little awareness of how serious a moral wrongdoing such cases involve, partly because they lack social reprobation, and c) these reasons generalize and explain why some types of non-sexual manipulation are morally more problematic than socially acknowledged. By utilizing higher-order propositional attitudes, I offer a simple but not simplistic framework for analyzing such cases that can be used in educational contexts to raise awareness and sensitize students to the moral issues involved.
On the Scope of Immediate Perceptual Justification
Megan Feeney, Rutgers University
Modest foundationalists hold that our beliefs about the external world can be immediately justified by perceptual experience, that is, justified not even partly in virtue of our holding other justified beliefs. However, it’s plausible that while certain beliefs might be immediately justified by perceptual experience (e.g., the table is brown), other beliefs could not be (e.g., the table was made in 1958). Thus, the modest foundationalist must offer some way of delineating the scope of immediately justified perceptual belief. In this paper, I’ll discuss a class of cases that should constrain the modest foundationalist’s account of the scope of immediate perceptual justification. In these mismatch cases, subjects are immediately perceptually justified in holding beliefs that outrun the contents of the experiences on which they are based. I argue that phenomenal conservatism, an important kind of modest foundationalism, struggles to explain how subjects in mismatch cases could be justified.
A New Understanding of Perception and Cognitive Penetration
Katherine Finley, University of Notre Dame
Cognitive Penetration is an increasingly popular, and controversial, theory in philosophy and psychology which holds that our mental states can and sometimes do directly affect our perceptual experiences. The majority of those arguing either for or against Cognitive Penetration worry that if it occurs, its effects are, for the most part, epistemically pernicious. This worry relies on a well-accepted understanding of perception which has recently been challenged by new research in cognitive science. Reframing our understanding of perception and Cognitive Penetration processes in light of this research presents a compelling way of understanding the effects of Cognitive Penetration as epistemically beneficial. I first briefly present my theory of Cognitive Penetration, and the “epistemic perniciousness worry.” I then outline a new understanding of perception and sketch an account of Cognitive Penetration in light of this new understanding, according to which the effects of Cognitive Penetration are often epistemically beneficial.
The Birth of Carnap's Internal/External Distinction
Vera Flocke, New York University
This paper discusses a crucial early application of Carnap’s internal/external distinction. I show that Carnap already distinguished between the kinds of questions that in his 1950 article “Empiricism, Semantics and Ontology” he calls “internal” and “external” in The Logical Syntax of Language (1937), where this distinction was part of Carnap’s solution to deep problems concerning the foundations of mathematics. Carnap wanted to show that one can accept simple type theory and the impredicative definitions that it condones without committing oneself to a Platonist view with respect to the properties to which impredicative definitions refer. To make this point, Carnap argued that the decision between simple and ramified type theory amounts to a decision between alternative definitions of mathematical truth. The core of Carnap’s view is that the relevant meta-linguistic truth-definitions can be justified only by meta-metalinguistic truth-definitions. The entities to which object-language sentences refer, however, are inessential in this regard.
Forgiving and Forgetting
Elizabeth Foreman, Missouri State University
In this paper I explore the question of whether or not forgiveness can be morally required by examining the degree to which it is under our control. Forgiveness has been defined as “forswearing resentment for moral reasons,” but this makes forgiveness sound as if it is more under our control than it actually seems to be. I argue that understanding forgiveness as adopting, rather than forswearing, an attitude, can accommodate the way in which forgiveness seems hard to willfully effect; such an understanding highlights the role that forgetting plays in forgiveness, but still accommodates the intuition that forgiveness can be morally required. When we say “forgive and forget,” we are not enjoining others to “wash away” another’s transgressions, nor to forswear legitimate resentment for moral reasons; we are enjoining others to come to see the transgressor differently, as one who has done wrong but is not defined by it.
State Authority by Convention and Fairness
Nicolas Frank, Lynchburg College
The arguments of philosophical anarchists and other skeptics of political obligation have changed the tone of discussion on the topic in recent decades. While traditional accounts attempted to find a general (i.e., applying to all or most subjects) and universal (i.e., applying to all or most laws) duty to obey the law, contemporary political theorists might reject either or both of these elements. I provide a surprisingly simple (non-voluntaristic) fairness account of the duty to obey the law that overcomes many of the worst problems associated with other accounts that utilize fairness. In some cases, we ought, out of fairness, to obey social conventions. The state plays a unique role in establishing, maintaining, and changing conventions via the law. Thus, we ought to obey the law when it represents a conventional solution to what I will call “moral coordination problems.”
Schopenhauer and Kant: Teleology and the Meanings in Intentional Cognition
Christopher French, Suffolk County Community College
The relationship between Schopenhauer's overall philosophy of self and world and Kant's Critique of Judgment is not frequently explored. To be sure, that Schopenhauer takes up certain points from the Critique of Judgment concerning the apprehension of beauty and art is fairly well known to scholars. However, this essay shows how a central aspect of Schopenhauer's entire metaphysics can be understood to grow out of concerns first expressed in Kant's third critique. In particular, it shows how Schopenhauer's general philosophy of the will reflects ideas first suggested in Kant's book, in which we find an attempt to bring together into a single picture the notions of intention and nature's lawfulness.
Resisting Oppression Together: Participatory Intentions and Unequal Agents
Christina Friedlaender, University of Memphis
One striking feature of anti-oppression movements is the range of unequally situated participants. On Christopher Kutz’s account of collective action, however, participatory intentions are sufficient for collective action. Knowledge of agents’ identities or social locations is not required. Using the example of participation in anti-oppression movements, I argue that participatory intentions are not sufficient for an account of collective action precisely because participants are unequally situated. First, unequally situated agents lack a shared epistemic framework, affecting the relationship between participatory intentions and actual participation. Second, vulnerable populations have a different relationship to goal formation and participation. Third, unequally situated agents can distort collective goal formation, which requires us to attend to the particular identities and social locations of participants.
A Working Test for Well-Being
Tobias Fuchs, Brown University
In order to make progress in the welfare debate, we need a way to decide whether certain cases depict changes in well-being or not. I argue that an intuitive idea by Nagel can be developed into a test to that purpose. I discuss a version of such a test proposed by Brad Hooker, and I argue that it is unsuccessful. I then present my own test, which relies on the claim that if compassion is fitting towards a person due to her having (or lacking) certain properties, then we know that having (or lacking) those properties affects the person’s well-being. I show how my test yields results in cases of cheating and deception, which have implications for central questions in the literature on well-being, such as whether what you do not experience can affect your well-being (the so-called Experience Requirement).
The Challenge to Race Eliminativism from Implicit Bias Research
Timothy Fuller, Yonsei University
This presentation pursues a novel objection to the view that we ought to eliminate racial categories from most of our thinking and communicating. Drawing on recent research into implicit racial biases, I argue that race eliminativism is in tension with several known methods for reducing implicit prejudice. Given that a wide range of discriminatory behaviors have been correlated with measures of implicit biases, eliminating racial concepts and racial language is objectionable for recommending that we relinquish tools with the potential for mitigating discrimination. Thus, implicit bias research supports the view that we ought to conserve racial categories in our thought and communication, while striving to excise their harmful effects.
Is Malebranche's God in Time?
Torrance Fung, University of Virginia
Orthodoxy has it that Malebranche’s God is atemporal. I argue that if we take seriously his ontology of how created beings have their properties, then we should think Malebranche’s God is temporal after all. Just as Malebranche thought bodies imitate or participate in God’s omnipresence by existing in space, so all creatures imitate God’s omnitemporality by being in time.
The Role of Dialogs and Soliloquies in Disagreements Involving Predicates of Personal Taste
Heidi Furey, University of Massachusetts Lowell
Linguistic data derived from examples of gustatory “faultless disagreement” often play a major role in deciding the correct semantics of predicates of personal taste such as “tasty.” In “Perspectives in Taste Predicates and Epistemic Modals,” Jonathan Schaffer argues that we have been too quick to think that explaining faultlessness is the only thing—or even the most important thing—that needs to be addressed regarding taste disagreements. When we are provided with additional contextual information, or alternative continuations of the dialogue, we may not have the intuition that the dialogue is faultless. One of the contextual complexities Schaffer examines is the role of dialogues and soliloquies in generating intuitions of faultless disagreement. Schaffer argues that his view, a form of contextualism called meaning perspectivalism, can explain linguistic data generated by cases of dialogues and soliloquies in a way that relativism cannot. In making his case for meaning perspectivalism, Schaffer offers one of the most robust and compelling treatments of taste disagreements in the current literature. However, I will argue that although Schaffer’s explanation of the contextual effect of dialogues and soliloquies may be effective, it does not help meaning perspectivalism gain a theoretical advantage over relativism with regard to taste disagreements. I claim that once we uncover the basis for the linguistic intuitions generated by the cases Schaffer presents, we can see that the relativist can accommodate the data as well as meaning perspectivalism can.
Understanding and Emulation
Georgina Gardiner, Rutgers University
In Knowledge and the State of Nature Edward Craig hypothesises that knowledge attributions evolved to fulfil a function; specifically, they served to tag good informants. Craig draws on this hypothesis about the function of knowledge attributions to illuminate the contours of the concept of knowledge. I build on this idea: I explore the proposal that understanding attributions also evolved to fulfil a function, namely serving to tag those worthy of intellectual emulation. I first describe characteristic features of understanding. I then argue that these features of understanding accord well with the proposal that understanding attributions tag those worthy of intellectual emulation, and I explore what this hypothesis suggests about the nature and value of understanding. In conclusion I examine competing hypotheses both for alternative functions understanding attributions might serve, and for what else may have served this function.
The Threefold Function of the Imagination in the Critique of Pure Reason
Gerad Gentry, University of South Carolina and Yale University
This article provides a unified account of the imagination [Einbildungskraft] in Kant's first Critique. Kant attributes a startling range of roles to the imagination. These are categorizable according to its three major functions, (1) “transcendental,” (2) “a priori,” and (3) “empirical,” through which it variously relates to the transcendental unity of apperception, yields the schema which makes possible the schematism, and produces the images for objects of experience. In order to unify these functions as belonging to one and the same imagination, I postulate four formal conditions of the imagination itself. These conditions are those formal features that condition all of its functions. I then proceed to show how it is that Kant sees the imagination as somehow simultaneously making possible the pure understanding, yielding the schemata, and producing the images for objects of experience. It is important to remember throughout this analysis of the transcendental imagination that we cannot interpret a priori intuitions or pure intuitions, which Kant calls “transcendental products of the imagination,” as pertaining to some noumenal realm or to objects as things-in-themselves. This article results in a clear formulation of the functions of the imagination that necessarily condition its activity as a structural feature of the mind necessary for the possibility of experience. I hereby defend a unified account of the transcendental functions of the imagination as one and the same core feature of the mind as developed in the Critique of Pure Reason. A further significance of this article is that it implicitly lays the groundwork for a larger argument that takes up Kant’s development of the imagination in both the first and third Critique while also paving the way for making sense of the significance that Fichte and Hegel place on the imagination in their interpretations of Kant’s Idealism.
Counterfactuals and Laws with Violations
Cameron Gibbs, University of Massachusetts Amherst
Evaluating counterfactuals in worlds with deterministic laws poses a puzzle. If a non-actual event obtained, then it seems that either the past would be different or the laws would be different. But both options are unintuitive. To avoid this dilemma, some have put forward a view of laws of nature that allows that the laws have violations. This allows that if a non-actual event obtained, then the past and the laws would hold, but the laws would have a violation. I argue that even if we grant this view, there are still counterfactuals where either the past is changed or the laws don’t hold. These cases involve considering a counterfactual supposition, and then, in that supposition, we consider another counterfactual supposition, and so on. These cases are just as troubling as the original cases that motivated allowing that laws have violations, undercutting one piece of support for this view.
Freedom and the Value of Games
Jonathan Gingerich, University of California, Los Angeles
This essay explores the aesthetic features in virtue of which games succeed as games. Thomas Hurka has recently argued that the goodness of games lies in their complexity: they structure simple activities into more complicated (and therefore more worthwhile) activities. I argue that an important element of the value of games is their ability to provide players with an experience of freedom, which they provide both as paradigmatically voluntary activities and by offering opportunities for relatively unconstrained choice inside the “lusory” world that players inhabit. I develop this argument through, first, a conceptual analysis of games and, second, a description of the phenomenology of games that provide their players with the experience of freedom and the formal techniques that they use to provide this experience. I then argue that Hurka’s thesis should be amended to reflect the centrality of this value to many games.
Disbelief as Mere Belief
Javier Gomez-Lavin, The Graduate Center, CUNY
Disbelieving some proposition p is often glossed as merely believing not p. However, there are various dialetheic considerations that push against such a generalization. In this vein, Graham Priest (forthcoming) motivates an account of disbelief in which to disbelieve p is to not believe p. However, this raises an important question: Just what kind of mental state is not believing? This paper leans on a Spinozan account of belief-fixation described by Mandelbaum (2014) in which any tokened proposition is passively believed. Using this framework, I argue that disbelief is a consciously accessible belief-like state that reports the status of other beliefs. Working through this account of disbelief also allows us to refine the Spinozan model of belief-fixation. Such an account of disbelief may help dissolve philosophical problems of self-deception and various paradoxes of fiction.
How Should Deep Self-Theorists Account for Weakness of Will?
August Gorman, University of Southern California
Deep self-views face an important objection: they counter-intuitively hold that we are not responsible for weak-willed acts, and thus fail to provide a necessary condition for attributional-responsibility. In this talk I will argue that in response to this serious problem, deep self-theorists should undergo a radical shift in the way they conceptualize deep self-views, which nevertheless preserves many of their attractions. After presenting a general strategy I go on to argue in favor of a particular modified view, a new view that I call the Mosaic Endorsement view.
Philosophizing from Experience: First-Person Accounts and Epistemic Justice
Abigail Gosselin, Regis University
Philosophers sometimes find it theoretically fruitful to share stories of personal experiences that illustrate a phenomenon or serve as a basis for philosophical analysis. Readers/listeners frequently praise authors for being brave in sharing their stories, particularly when these stories are of experiences which, when disclosed, make them vulnerable. One form of vulnerability that authors face is vulnerability to epistemic harm or even injustice. When people share experiences or disclose information that suggests (rightly or wrongly) that their rationality is impaired, this may diminish their epistemic credibility, and people may suffer injustice when interlocutors make false assumptions or over-generalize about the scope and severity of a person’s impairment. In professional contexts, perceived impairments in rationality can also diminish professional credibility. In this paper I analyze this vulnerability to epistemic injustice, and I consider the epistemic conditions that are required to ensure epistemic justice when first-person accounts are shared in professional contexts.
The Real Simple Argument for Higher-Order Theories
Joseph Gottlieb, Texas Tech University
Broadly construed, there are two main competing theories of phenomenal consciousness: Higher-Order theories and First-Order theories. Higher-Order theories claim that phenomenally conscious mental states are those we are aware of in some suitable way. First-Order theories are best understood negatively as denying this claim. William Lycan (2001) has given a purportedly simple argument for Higher-Order theories of consciousness. This argument, at least when interpreted as advancing a genuine competitor to First-Order theories, is dialectically ineffective. This paper tries to do better. I advance what I call “The Real Simple Argument” for Higher-Order theories. Though not as simple as Lycan’s, it is about as simple as we can get, while still remaining a real argument.
Harms (… and Goods): the Legacy of the Atrocity Paradigm as a Normative Theory
Jill Graper Hernandez, University of Texas at San Antonio
It has been argued that Claudia Card’s atrocity paradigm would have to abandon its commitment to the negative transmutativity of atrocious harm because certain goods can change lives that have suffered systemic, denigrating harm. This paper argues that a better course of action for the atrocity paradigm is to integrate positive transmutativity into the paradigm, with the result that the atrocity paradigm can become a fuller normative theory, replete with wrong-and-right making criteria for action. This paper raises new challenges for the atrocity paradigm as a way to strengthen the paradigm’s overall goal of preventing the perpetuation of atrocious harms. In what follows, I briefly outline the systematicity and transmutativity conditions of an ‘atrocity’, demonstrate how so-called “transmuted goods” threaten the transmutativity condition of an atrocity, and integrate a theory of transmuted goods into the atrocity paradigm, to better cast the paradigm as a contemporary normative theory.
Complex Thought and Private Language
Sebastian Greve, Oxford University
This paper argues that there is a conception of private language that provides an interesting sense in which there actually is such a thing as private language (contrary to what some followers of Wittgenstein have claimed); and that this conception can be used to illuminate the characteristic difficulty of understanding each other in philosophy, including possible strategies to tackle this kind of difficulty. The resulting account is contrasted with recent work by David Chalmers on verbal disputes.
Arbitrary Reference Is Pluri-Reference
Eric Guindon, University of Connecticut
In reasoning to a universal conclusion or from an existential premise, it is common to deploy names introduced into the reasoning via stipulations. Breckenridge and Magidor (2012) defend the arbitrary reference (AR) view, according to which instantial names refer to particular, albeit arbitrarily selected, individuals. I begin by highlighting two problems for (AR). I then sketch an alternative view that avoids them. According to it, instantial names have universal pluri-reference: they refer to each and every object in the domain. Sentences that contain pluri-referring names express multiple propositions, each of which results from composing a referent for each pluri-referring name with the semantic values of the other constituents of the sentence. I then show how to account for inference using such sentences, and how this results in the agent who engages in valid instantial reasoning simultaneously running through a multiplicity of valid arguments.
Desire and Loving for Properties
Yongming Han, Brown University
The simple desire-based view of love denies that love is reason-responsive: it says that love is a strong, intrinsic desire for the beloved’s good, and desires are not responses to reasons. An influential challenge for such views arises from how love can seem reason-responsive—from how we often come to love non-family-members in the wake of liking their properties. Call this datum Coming to Love. I’ll argue that the desire-based view can explain data like Coming to Love: it can explain why love often follows in the wake of liking the beloved's properties, even though it’s inherently not reason-responsive. Indeed, the desire-based view can provide a better explanation than its competitors.
Punishment, Permissibility, and Justification
Nathan Hanna, Drexel University
I argue that justifying punishment is more complex than many philosophers of punishment realize. Many proposed justifications focus exclusively on arguing that punishment can be permissible. But justifications of punishment must do more than that. They must also argue that punishment can be well-motivated, that punishers need not act culpably or viciously. Some proposed justifications make it especially easy to see the need to argue for this. I focus on one in particular: Kit Wellman’s version of rights forfeiture theory. I argue that its exclusive focus on permissibility is a problem. And I argue that other popular justifications of punishment have the same problem.
What's the Point of Understanding?
Michael Hannon, Queen's University
What is human understanding and why should we care about it? In this paper, I propose a method of philosophical investigation called “function-first epistemology” and use this method to investigate the nature and value of understanding. I argue that the concept of understanding serves the practical function of identifying good explainers, which is an important role in the general economy of our concepts. This hypothesis sheds light on a variety of issues in the epistemology of understanding, including the role of explanation in understanding, the relationship between understanding and knowledge, and the value of understanding. I argue that understanding is valuable and yet knowledge plays a more important role in our epistemic life.
Animal Rights Terrorism and Pacifism
Blake Hereth, University of Washington
If animals have robust moral rights, a serious moral quandary arises: Is animal rights terrorism permitted? According to the Terrorist Objection, the answer is a damning “yes.” If animals have robust rights, they have a right to self-defense and a right to defensive assistance, the latter of which implies that others are at the very least permitted to maim or kill in an animal’s defense. Such a conclusion has the extremely counterintuitive implication that it is permissible to maim or kill thousands of animal researchers, zookeepers, puppy mill owners, chefs, farmers, and ranchers. I then consider four possible replies to the Terrorist Objection. The first is that animals have no rights, but that is insufficient to blunt the charge since it seems permissible to defend animals from some unjust threats (e.g., to defend a puppy from being tortured by electricity). The second is to motivate the view that animal researchers, ranchers, etc., are not liable to defensive harm even if animals have robust rights, yet that falsely implies that slaveholders were not liable to defensive harm. The final two possibilities are terrorism and pacifism. The former claims that we are permitted to engage in violent animal rights terrorism, whereas the latter claims that violence is never permitted. I show that the central objections to terrorism and pacifism are equally strong—a result that vindicates pacifism.
Beyond Emotion Versus Reason: Does Neuroscience Undermine a Classic Metaethical Distinction?
Geoffrey Holtzman, Illinois Institute of Technology
Critical reflection on the available neuropsychological evidence suggests that the roles of emotion and reason in moral judgment may not be distinct. This casts significant doubt on our current understanding of moral judgment, and therefore also on all philosophical theories based on that understanding. Most notably, it raises doubts about both sentimentalism and rationalism, which historically have often been treated as exclusive and exhaustive theories regarding the nature of moral concepts. As an alternative, I endorse pluralism with regard to the emotional and rational nature of moral concepts.
Grounding Truth and Making True
Hao Hong, Indiana University Bloomington
Truthmaking is similar to grounding in that they both give rise to some kinds of non-causal metaphysical explanation. Some philosophers therefore think that truthmaking is a species of grounding: when a proposition is made true by its truthmaker, the truth of that proposition is grounded in its truthmaker. I maintain that truthmaking is not a species of grounding. I distinguish the truthmaking relation from truthmaking explanation, and argue that a truthmaking relation is not a grounding relation and that a truthmaking explanation is not a grounding explanation. In spite of these differences, I further explain why we find truthmaking explanations plausible. I argue that truthmaking explanations are partial grounding explanations that are modally, epistemologically, and pragmatically adequate.
Disjunctivism and the Stream of Consciousness
Justin Humphreys, University of Pittsburgh
Epistemic disjunctivism is the doctrine that there is a fundamental justificatory difference between veridical perception and non-paradigmatic perceptual episodes, such as illusions and hallucinations. Veridical perceptions offer one an indefeasible warrant for believing what one perceives, while defective cases provide no justification for belief. In this paper, I identify the disjunctivist claim as one that follows from the commitments of American pragmatism. I then argue, against some contemporary versions of disjunctivism, that veridical perceptual episodes do not wear their epistemic credentials on their sleeve, that is, that they are phenomenally indistinguishable from illusions and thus provide no special warrants for belief. I finish by drawing on a conception of consciousness as a continuous flow of experience to argue that veridical experiences are practically distinguishable from illusions. This suggests that it is through action rather than introspection that our perceptual experiences gain epistemic status.
Two Ways to Want?
Ethan Jerzak, University of California, Berkeley
I present hitherto unexplored and unaccounted-for uses of “wants.” I call them advisory uses, on which information inaccessible to the desirer herself helps determine what it's true to say she wants. I show that extant theories fail to predict them. I also show that they fail to predict true indicative conditionals with “wants” in the consequent. I argue for a relativist semantics, according to which the contents of desires are information-neutral propositions. The truth of a desire attribution, on the view I arrive at, depends on the state of information at the context of assessment. I sketch a pragmatic account of the purpose of desire attributions that explains why it made sense for them to evolve in this way.
The Structure of Implicit Bias
Gabbrielle Johnson, University of California, Los Angeles
Implicit bias, though a popular topic in public and academic spheres, is a subject fraught with confusion, complication, and controversy. Many theories of implicit bias presuppose commitments about the nature and structure of belief, inference, rationality, and association. For this reason, it's difficult to find a model of implicit bias that doesn't beg the question against some philosophical view or other. The purpose of this paper is to explore the options for minimizing these tensions. The result of this exploration is a model of the minimal structure of implicit bias—one that leaves the notion of bias as open-ended and unregimented as the empirical data allow. The upshot of this approach is twofold: having a model of the minimal structure of implicit biases will allow those wanting to prescind from further discussions of the philosophical import of implicit bias to do so with impunity, while allowing those who want to engage in these discussions to do so from a common starting point. The functional account provided in this paper models the input-output patterns of implicit bias, while remaining neutral with respect to the nature of the cognitive mechanisms that underlie these patterns. This allows us to get a basic profile of implicit bias without doing any work under the hood. The major discovery of this account is that the empirical data leave open two options for how implicit biases instantiate the functional account: the transformations between the input-output states could follow an unconscious, but fully represented rule—like a stereotype in the case of explicit bias; or the transformations could follow an unconscious and merely encoded rule—that is, a principle that merely operates as if it has content. Since the empirical data don’t settle this question, it’s critical that we adopt the functional model.
Resolutions, Salient Reasons, and Weakness of Will
Christa Johnson, Ohio State University
After rejecting the traditional view of weakness of will as acting contrary to one’s better judgment, or akrasia, Richard Holton submits that an agent is weak-willed when she unreasonably revises a contrary-inclination-defeating intention, or resolution. While Holton’s rejection of the traditional view provides insights into weakness of will, I argue that his own view is much too narrow in scope, as it fails to capture cases of weakness of will in which neither a resolution nor a future-directed intention is involved. I then present a novel view of weakness of will which maintains that an agent is deemed weak-willed when she acts contrary to her own salient reasons. I defend the view by showing that it is able to capture the insights gathered from rejecting the traditional view as well as Holton’s resolution view.
Reacting to Moral Ignorance: A Discussion of Shame, Blame, and Culpability
Mariam Kazanjian, Indiana University Bloomington
In the literature on moral ignorance, philosophers repeatedly address a particular problematic aspect of Gideon Rosen’s theory. Throughout his work Rosen suggests anecdotal examples of morally ignorant agents who don’t feel too badly about their actions in retrospect. After all, they say, the mistakes that they made were blameless. Rosen goes so far as to say we should not blame them, even when their moral ignorance causes egregious damage to others. Clearly, this does not match our normal responses to those that have wronged us, and Rosen acknowledges as much. He even suggests a Strawsonian objection to his own thesis: that even if these people aren’t blameworthy, we still want to blame them. Consequently, the moral ignorance thesis won’t accommodate normal reactive attitudes. Attempting to resolve this issue, I will argue that there is an attitude we might hold toward morally ignorant people, or that they might hold about themselves, which will satisfy our Strawsonian intuitions and still preserve Rosen’s thesis. A philosophical exploration of shame will reconcile the two apparently competing theories.
Is Blameworthiness Forever?
Andrew Khoury, Arizona State University
Benjamin Matheson, University of Gothenburg
Many of those working on moral responsibility assume that "once blameworthy, always blameworthy." That is, they believe that blameworthiness is forever. We argue that blameworthiness is not forever; rather, it can diminish through time. We begin by showing that the view that blameworthiness is forever is best understood as the claim that personal identity is sufficient for diachronic blameworthiness. We argue that this view should be rejected because it entails that blameworthiness for past action is completely divorced from the distinctive psychological features of the person at the later time. This is because on none of the leading accounts of personal identity does identity require the preservation of any distinctive psychological features, but merely requires some form of continuity. The claim that blameworthiness is forever should therefore be rejected. We then sketch an account of blameworthiness over time that serves as a model for constructing more developed accounts of diachronic blameworthiness.
The Epistemic Structure of Economic Models and the Problem of Confirmation
Jinsook Kim, Seoul National University
In this paper, we investigate the problem of confirmation for economic theories, with a particular focus on the Market Selection Theory (MST). Ever since Hempel's (1943, 1945) early studies, most philosophical literature on confirmation has been devoted to the natural science model, where the scientist, i.e., an outside observer, does not interfere in the model she is observing. We extend the scope of discussion to economic models. We suggest that the confirmation of economic models has a distinctive epistemic structure, and we further argue that due to this structure the MST is not directly confirmable with data. We construct a Bayesian confirmation model to prove this.
Phenomenology and Metaphysics in Being and Time
James Kinkaid, Boston University
Interpreters of Being and Time are divided on how to understand the central concept of the book—namely, being. According to Steven G. Crowell and Thomas Sheehan, Heidegger performs a phenomenological reduction from being to meaning; Heidegger uses Sein to refer to meaning (Sinn). In contrast, Kris McDaniel and Howard D. Kelly defend interpretations on which Being and Time is robustly metaphysical. I argue that both interpretations get something right. In Husserlian terms, Being and Time is a work of revisionary formal and regional ontology and of constitutive phenomenology. To determine the “meaning of being” is to give an account of how ontological knowledge is possible. This is to give an analysis of the intentional structures in virtue of which the formal categories and regional essences of entities can be given—that is, an existential analytic or ontology of Dasein. Being and Time thus “provide[s] ontological grounds for ontology” (487).
Social Interactions, Aristotelian Powers, and the Ontology of the I-You Relation
James Kintz, Saint Louis University
While there has been much promising work on the second-person in the philosophy of mind, very little has been said concerning the ontological nature of the second-person relation. Yet if those defending a second-person approach to intersubjectivity are correct that this is a unique type of relation, then we need a more precise account of what makes this relation distinct. In this paper I seek to provide an ontological analysis of the I-You relation. I will develop my account by employing an Aristotelian powers ontology, arguing that an I-You relation forms as a result of the activation of ontologically interdependent social powers. Moreover, I will suggest that the I-You relation is both bidirectional and dynamic. By offering an Aristotelian analysis of the I-You relation I suggest that we gain greater clarity on the nature and importance of this relation.
Heidegger as a Nazi Bureaucrat: An Archival Report
Adam Knowles, Drexel University
What kind of bureaucrat was Martin Heidegger? Drawing on the archival record, this paper argues that as Rector of Freiburg University from 1933-34 Heidegger was a skilled Nazi bureaucrat whose primary achievement was to swiftly “Aryanize” Freiburg University. Depicting Heidegger’s bureaucratic practices as Rector and his skill at negotiating the cultural politics of National Socialism after the Rectorate, this paper raises two questions: Why has the literature overlooked Heidegger’s bureaucratic practice and focused primarily on his intellectual relationship to Nazism? What does it mean that Heidegger could align his thinking so thoroughly with the violent bureaucratic practices of a totalitarian regime?
“Non-Idealizing Abstraction” as Ideology: Nonideal Theory and the Power Dynamics of Oppression
Youjin Kong, Michigan State University
Recently, social and political philosophers have shown an increased interest in the ideological nature of ideal theory and the importance of nonideal theory. Charles Mills, who sparked recent critiques of ideal theory, invokes the notion of “non-idealizing abstractions” and argues that these are helpful when doing nonideal theory. In contrast, I argue that Mills’s notion of non-idealizing abstractions is not a very helpful tool for doing nonideal theory. I suspect that the concept pays insufficient attention to the power dynamics of oppression. These dynamics not only make it difficult to distinguish non-idealizing abstractions from “idealizing abstractions,” but also significantly affect judgments about whose experiences and interests are worth being reflected by an abstraction. By ignoring the actual power dynamics of oppression and power differentials among the oppressed, what Mills takes to be non-idealizing abstractions falls into ideology, which cannot reflect the experiences or interests of less-privileged minorities, but only those of more-privileged minorities. In order to minimize the risk of falling into ideology when doing nonideal theory, I suggest that one should avoid asserting that an abstraction is “non-idealizing,” and should instead protect the “resignifiability” of any abstraction.
Constitution, Dependence, And Mereological Hylomorphism
David Kovacs, Bilkent Universitesi
Constitution is the relation that holds between an object and what it’s made of: statues are constituted by the lumps of matter they coincide with; flags, one may think, are constituted by colored pieces of cloth; and perhaps human persons are constituted by biological organisms. Constitution is often thought to be a dependence relation. In this paper, I will argue that given some plausible theses about ontological dependence, most definitions of constitution don’t allow us to retain this popular doctrine. The best option for those who want to maintain that constitution is a dependence relation is to endorse a kind of mereological hylomorphism: constituted objects have their constituters as proper parts, along with a form, which is another proper part. The upshot is that constitution theorists who think of constitution as a dependence relation but are reluctant to endorse mereological hylomorphism ought to give up one of their commitments.
A People's Legitimacy and the Qualified Right to Exclude
Jonathan Kwan, The Graduate Center, CUNY
Justifications for border controls and immigration restrictions face a hard theoretical question of democracy—namely, the boundary problem: If democracy is rule by the people, who should be a member of the people? Appealing to democratic decision-making procedures cannot solve this problem but rather raises an infinite regress, as there will always need to be a prior people deciding who is to be a member of the people doing the deciding, and so on. I give an account of a people’s right to control borders and exclude others that addresses head-on the boundary problem and questions about a people’s legitimacy. My thesis is that a people has a qualified right to exclude others when it is legitimate. Avoiding an infinite regress requires that the legitimacy of a people appeal not to democratic procedures but to the substantive principles of self-determination and justice that underlie those procedures. A people will have the qualified right to exclude when it meets the following three conditions for legitimacy derived from such substantive principles: a) it does not exclude any resident engaging in the common activities within its territory, b) it respects the self-determination of those it constitutes as outsiders—both other peoples and refugees (people-less persons), and c) it does not exclude anyone where doing so violates justice, especially its global justice duties to help ensure that all humans have the opportunity to live a decent life. These conditions for a people’s legitimacy also determine the scope and extent of a people’s right to exclude, since they act as constraints on such a right. My position thus strikes a midway point between open border proponents and strong sovereigntists who defend the unqualified right to exclude outsiders, while being theoretically sound in providing a principled answer to the boundary problem.
Anatomy of the Thigh Gap
Celine Leboeuf, Florida International University
The thigh gap, which is the space some women have between their legs when they stand with their feet together, has become a fetishized body part and badge of slenderness. My paper aims to identify what is wrong with the thigh gap obsession and other structurally similar obsessions, and to suggest a way to overcome them. I argue that the relation women in the grip of this obsession have to their bodies is an instance of bodily alienation. I further argue that the best response to this obsession lies not in broadening our beauty standards, but in cultivating a certain sensualism, a notion that I develop on the model of Karl Marx’s conception of unalienated labor. My critique of the thigh gap obsession both explains its origins and discusses how it offers an illuminating example of the social construction of the body.
Jason Leddington, Bucknell University
That sounds are caused by their sources is pre-theoretical common sense. Thus, we speak of sounds as “made” or “produced” by everyday objects and events such as telephones and collisions. But this poses a prima facie problem for those of us who, following Locke, favor a Property View of Sounds. After all, if sounds are properties, then presumably they are properties of their sources; but then sounds are caused by the very things they qualify, and we do not usually think of properties as caused by their bearers. The intuition that sounds are caused by—and so do not qualify—their sources is, I think, widely-shared (if seldom articulated). This article contests the intuition: I argue that it is perfectly plausible to treat sounds as both effects and properties of their sources. In fact, as it turns out, we regularly (and pre-theoretically) do the same thing for colors!
Substitution and Kenosis
Chungsoo Lee, EEO 21, LLC
Levinas’s key terms in Otherwise Than Being or Beyond Essence, such as “substitution,” “hostage,” the self “emptying itself,” “recurrence,” and “incarnation,” are explicated in order, on the one hand, to assess the feasibility of their application in Christian theology, more specifically in understanding the substitutional death of Christ and the liturgy of the Eucharist; and, on the other, to determine whether the terms found in Levinas can entirely escape the theological meanings embedded in the religious terms he borrows. I will try to show that in Levinas the self is Christ. Citing Paul’s use of the term “kenosis” in his Letter to the Philippians, I suggest at the end that Christ’s incarnation and passion exemplify and actualize Levinasian ethics, which, I further propose, is concretized in the liturgy of the Eucharist, where we too are offered as gifts to the world.
A Causal Modeling Semantics of Indicative and Counterfactual Conditionals
Kok Yong Lee, National Chung Cheng University
Duen-Min Deng, National Taiwan University
In this paper, we construct a novel causal modeling semantics of conditionals. Our semantics differs from the orthodox view in that it predicts that indicative and counterfactual conditionals are two different kinds of conditionals. More precisely, we formulate two types of causal manipulation, i.e., intervention and extrapolation. We argue that indicative and counterfactual conditionals are generated respectively by extrapolation and intervention.
The Bayesian Challenge to Faith
Matthew Lee, Berry College
I present a new ethical challenge to propositional faith. I argue that a key component of an ethics of faith will be a requirement that the actions one’s faith disposes one to perform should (at least in general) respect the Bayesian requirement to maximize expected value. But since virtuous agents do not fulfill ethical requirements by mere dumb luck, it would seem that virtue requires Bayesian practical reasoning. Such reasoning, however, does not use first-order propositions as premises; it employs higher-order propositions about the probabilities of first-order propositions. But I argue that an agent who has faith that p uses p itself, rather than higher-order propositions about p’s probability, as a premise in deliberation. The (apparent) result is that propositional faith is incompatible with what is required for virtue. I end by proposing a response to this challenge, modeled on Jacob Ross and Mark Schroeder’s response to a related problem.
Through the Mirror: The Account of Other Minds in Chinese Yogacara Buddhism
Jingjing Li, McGill University
This paper investigates the theory of other minds provided by the Chinese Yogacarins Xuanzang and Kuiji. Advancing beyond the status quo in existing Yogacara scholarship, which equates our knowledge of others with projection or with reproduction, I argue for understanding the Yogacara account of other minds as revelation, through which Yogacarins describe how the invisible embodied experience of others discloses itself through the second-person and third-person perspectives. This description further yields the metaphysical explication of no-self and the prescription of norms for compassionate action. These three levels (descriptive/epistemic, explicative/metaphysical, and prescriptive/ethical) constitute the Yogacara phenomenology of other minds.
Why Is Rationality Morally but Not Epistemically Permissive?
Han Li, Brown University
Bradford Saad, University of Texas at Austin
Morality is intrapersonally permissive: under some circumstances an agent has more than one morally permitted option. In contrast, epistemic rationality is (plausibly) not intrapersonally permissive: (plausibly) there are no cases in which an agent has more than one epistemically permitted response to her evidence. This disanalogy between morality and epistemology calls out for explanation. The paper's task is to answer that call. We proceed by considering two types of permissive case—cases of ties and cases of supererogation—and explaining why they have moral instances but not epistemic instances.
Attitudinal and Phenomenological Theories of Pleasure
Eden Lin, Ohio State University
On phenomenological theories of pleasure, what makes an experience a pleasure is something about what it is like or the way it feels: pleasures are pleasures in virtue of possessing a certain kind of phenomenology. On attitudinal theories, what makes an experience a pleasure is something about its relationship to the favorable attitudes of the subject who is having that experience: a particular experience is a pleasure in virtue of being, say, liked or desired by the subject who is having it, or in virtue of consisting of that subject’s liking or desiring something else. I advance the debate between these theories in two ways. First, I argue that the main objection to phenomenological theories, the heterogeneity problem, is not compelling. While others have argued for this before, I identify an especially serious version of this problem that resists existing solutions, and I explain why even this version of the problem does not undermine phenomenological theories. Second, I argue that a grand reconciliation can be effected between the two types of theory: it can be true both that pleasures are pleasures in virtue of how they feel and that they are pleasures in virtue of how they are related to their subjects’ favorable attitudes, so long as the attitudes that are constitutively related to pleasures are ones that feel a certain way. Hybrid views of this sort have significant advantages over pure attitudinal or phenomenological views.
In Defense of Moderate Pluralism About Ground
Jon Litland, University of Texas at Austin
A moderate pluralist about ground holds that there are three irreducible notions of ground—metaphysical, normative, and natural. Berker has recently argued that the pluralist cannot account for certain mixed transitivity and asymmetry principles, and that, accordingly, we should be monists about ground. I defend pluralism by showing how to account for these mixed principles.
Metastability and Truth Transmission
Shay Logan, North Carolina State University
A top-down search for logics seems doomed, because we can't trust the very logics we'd be measuring our arguments against. We are left to pursue a bottom-up approach. One way to pursue a bottom-up approach to finding logics is to search for broad classes of candidate logics that are V-metaunstable with respect to logical virtues V that we take seriously. This paper outlines a general way to actually accomplish this type of bottom-up strategy.
Imprints in Time: a Moderately Robust Past
Michael Longenecker, University of Notre Dame
Here are two important intuitions about time: (i) tensed truths must be grounded and (ii) there is genuine change. These, however, seem to push in opposite directions: the first pushes us to incorporate past objects with features robust enough to ground past-tensed truths, while the second requires that they not be too robust. I don’t think extant views balance these intuitions in a satisfying way. My aim is to do better. General Relativity tells us, roughly, that the presence of mass-energy determines the curvature of spacetime. On this basis, the view I develop tells us that the past consists of curved spacetime regions devoid of mass-energy. This allows us to say that what exists—such as dinosaurs—genuinely changes; nevertheless, the truth of “dinosaurs existed” is grounded in the curvature of the past. The curvature of the past is the imprint mass-energy leaves on time.
Indexical Relativism Reconsidered
Bradley Loveall, Georgia State University
In this paper, I defend the metaethical thesis known as indexical relativism. Indexical relativism is the thesis that the propositional content of a moral sentence is indexed to the context in which that sentence is believed or uttered. Indexical relativism has long been thought to be undermined by the problem of disagreement, since it allows moral disagreement between two disputants without requiring that the propositions uttered by the disputants contain exclusionary content. Recent research by Justin Khoo and Joshua Knobe suggests, however, that indexical relativism coheres quite nicely with folk attitudes about moral disagreement. If the manner in which indexical relativism treats disagreement is a virtue rather than a vice, then indexical relativism should be poised to attract renewed attention. The time is right, therefore, to reconsider the theory. This paper serves as a reintroduction and defense of a theory that is overdue for a fair shake.
“The Sovereigns of the Empire of Conversation”: Hume on Women
Getty Lustila, Boston University
Many early modern philosophers claim that women are not mature moral agents—they are thought to be unduly compassionate and prone to flights of fancy. Hume argues that women’s increased compassion and imagination affords them greater sensitivity to the passions of others, and makes them “Sovereigns of the Empire of Conversation.” Hume views the conversable world as the cradle of sociability, and views women’s graceful movement through it as an indication of their having a “delicacy of taste,” a capacity that is central to moral judgment. For Hume, the person of taste, the polite person, and the moral person are the same person. I argue that the highest expression of this person, for Hume, is a properly educated woman. In this manner, Hume is more progressive on gender issues than many of his contemporaries. My aim is to understand why Hume gives women a privileged position in his moral theory.
Equal Rights for Zombies? Phenomenal Consciousness and Responsible Agency
Alexander Madva, California State Polytechnic University, Pomona
Intuitively, moral responsibility requires conscious awareness of what one is doing, and why one is doing it, but what kind of awareness is at issue? Levy (2014) argues that phenomenal consciousness—the qualitative feel of conscious sensations—is unnecessary for moral responsibility. He claims that only access consciousness—the state in which information (e.g., from perception or memory) is available to an array of mental systems (e.g., such that an agent can deliberate and act upon that information)—is relevant to moral responsibility. I argue that a wide class of views entail that the capacity for phenomenal consciousness is necessary for moral responsibility. I focus in particular on considerations inspired by Strawson (1962), who puts a range of qualitative moral emotions—the reactive attitudes—front and center in the analysis of moral responsibility.
Intellectual Virtues and Biased Understanding
Andrei Marasoiu, University of Virginia
On a prominent view, to understand something necessarily involves exercising intellectual virtues (Zagzebski 2001), conceived broadly so as to include global traits of cognitive character like open-mindedness, but also various more specialized cognitive skills (Sosa 2007). Adam Carter and Pritchard (2016) raise the following skeptical objection: Most of us are biased. Even if our knowledge is due solely to intellectual virtues, that is sheer luck because it might have easily been due to intervening cognitive biases. However, the objection goes, understanding isn't lucky; it is due entirely to intellectual virtues. Since the lucky absence of bias interference is pervasive, it follows that we are, most of the time, mistaken in thinking we genuinely understand. I answer the skeptical objection by distinguishing three moments of understanding, and indicating the role that conscious de-biasing may play as a backdrop in reflection whenever, occasionally, our intellectual virtues might fail us.
The Nature of Reasons: Alienation and the Wrong Kind of Reason
Alex Marmor, Harvard University
A reason is, at its most basic, a consideration that concerns the thing that it is a reason for. But “concerns,” here, is ambiguous. Several attempts have been made to clarify the concerning-relation, and so the nature of reasons. On a standard view, held by T. M. Scanlon and Derek Parfit, a reason is a consideration that counts in favor of an attitude. Pamela Hieronymi identifies two problems for the standard view: what I call the “alienation problem” and what is known as the “wrong-kind-of-reason problem.” In light of these problems, Hieronymi argues that a reason is a consideration that instead bears on a question. In this paper, I offer a clarification of the standard view, which I think solves the problems facing the view. If my argument succeeds, then we will have learned something about the standard view of reasons and the nature of reasons. This is progress.
The Patient's Duty to Disclose
Allison Massof, Ohio State University
Ordinarily we are not morally required to disclose intimate information to others simply because they request it. We are not morally required to do so even if it is important to them. Yet patients are handed disclosure forms that ask them about family health history, substance use, sexual history, and other private matters. The assumption is that the patient has a duty to provide this information to the physician; withholding otherwise private information is impermissible. The AMA Code of Medical Ethics attempts to derive this duty from the patient’s status as an autonomous agent. I argue that this derivation is unsatisfactory, and I offer a competing account. Patients have a duty to disclose a complete medical history because this disclosure corrects for a power imbalance that favors the patient over the physician, one that threatens to undermine the prospects of collaboration between a physician and patient.
Adjusted Subjective Theories of Ill-Being
Eric Mathison, University of Toronto
A theory of well-being explains what makes an individual’s life go better or what contributes intrinsic value to it, while a theory of ill-being explains what makes a life go worse or contributes intrinsic disvalue. In this paper, I discuss “adjusted subjective theories.” Such theories hold that someone is well off to the extent that she has positive mental states, but her welfare is adjusted for some non-subjective component. In contrast, a non-adjusted subjective theory considers only the subjective experiences of the individual. The three components I consider are truth, freedom, and quality. I argue that, regardless of their plausibility as theories of well-being, none is plausible as an account of ill-being. This result points to a consistent asymmetry between the good and the bad.
Mental Illness as Inadaptivity
Laura Matthews, University of Georgia
This paper explores an enactive approach to mental illness by equating it with a failure of adaptivity. Adaptivity refers to the embodied mind’s ability to alter its behavior in order to place itself more firmly within the realm of its viable conditions. Mental illness as inadaptivity in human beings can operate in one of three ways: (1) by effectively limiting the range of activities that an individual can undertake (without any physiological source for the limitation), i.e., a narrowing of one’s world, (2) by creating an irreconcilability between the individual’s subjective experience and the objective world, which should be more or less accurately tracked by subjective experience, and (3) by creating a gulf between the individual’s personal world and the world of interpersonal relations and intersubjectively valid discourse.
Biology's 2nd and 3rd Laws
James Mattingly, Georgetown University
I extend Brandon’s version of the analogy between evolutionary biology and Newtonian physics by showing that analogues of the 2nd and 3rd Laws of motion are necessary to make the analogy function, and then showing that Natural Selection and the Hardy-Weinberg law respectively fill these roles. This work will help to clarify debates about the nature and status of laws in biology.
Self-Blame and Sexual Violence: A Feminist Intervention
Amy McKiernan, Vanderbilt University
In this paper, I engage with the contemporary philosophical literature on blame to differentiate between morally appropriate and inappropriate deployments of blame for the purpose of interrupting pervasive patterns of self-blame for sexual assault and relationship violence. The first section of this paper evaluates contemporary accounts of blame to see what they have to say about self-blame. The second section considers cases of self-blame under oppressive conditions, specifically the pervasiveness of (1) self-blame following sexual assault and (2) self-blame for remaining in an abusive relationship. The third section introduces terminology offered by Bernard Williams in “Internal and External Reasons” (1981) and “Internal Reasons and the Obscurity of Blame” (1995). I borrow the terminology of internal and external reasons from Williams insofar as I want to stress the difference between blaming oneself for failing on the basis of reasons of one’s own and blaming oneself for failing on the basis of external reasons. Further, it looks as though we may need more than the distinction between internal and external reasons to capture what we might understand as “internalized” reasons, or reasons that are internal, but somehow alien, to our other beliefs, desires, attitudes, and projects. In an attempt to begin to understand how to identify and eradicate these internalized but alien oppressive reasons, I conclude this section with a discussion of Sandra Bartky’s work on feminist consciousness raising as this process relates to critical self-reflection.
Talking Ourselves Senseless
Daniel Mendez, Boston University
I argue that we linguistic practitioners can dissolve our linguistic practices by participating in them. A linguistic practice is not ongoing if there is nothing anyone can do that would make it appropriate to attribute to them a commitment, nor if there is nobody who is appropriately counted as having any commitments. I show that we can, by making moves that have a proper significance within our linguistic practices, make it so that nothing we can do would make it appropriate to attribute to us commitments, and that we can, through our linguistic practice, make it so that it is not appropriate to attribute to anybody any commitment. I connect these rather abstract arguments with extant work on epistemic injustice and with recent political developments. If I am right on these points, then it is possible for us to dissolve our linguistic practices from within in several ways. I take these arguments to show that even if, as teleosemantic approaches to language indicate, a natural-historical account of our linguistic practices, and meaning in general, can be given that is not fundamentally different from the sort of accounts to be given of the developments of livers and bones, our language now inextricably includes an element of freedom and responsibility that is simply lacking in our livers and bones.
Natural Goodness and Biological Goodness
Parisa Moosavi, University of Toronto
Neo-Aristotelian ethical naturalism appeals to the “natural good” of living organisms to show that moral virtue is an instance of natural goodness. Opponents of this view object that the neo-Aristotelian concept of natural goodness has no place in biology and cannot be reduced to proper biological functioning. In response, neo-Aristotelians deny that their concept of function has anything to do with the biological concept of function—a response that seems to undermine the naturalistic credentials of their view. In this paper, I argue that a reduction to biological functioning is not necessary to place natural goodness metaphysically on a par with biological functions. I appeal to an organizational account of biological functions to argue that functional ascriptions in biology presuppose the same kind of normativity as evaluations of natural goodness: they both presuppose intrinsic organismic normativity, which is enough to put them metaphysically on a par.
The Irrelevance of Harm for Disease
Dane Muckler, Saint Louis University
A major position on the nature of disease, normativism, holds that there is a close conceptual link between disease and disvalue. I challenge normativism by advancing an argument against a recent and popular normativist theory, Jerome Wakefield’s harmful dysfunction account. Wakefield maintains that medical disorders are breakdowns in evolved mechanisms (dysfunctions) that cause significant harm to the organism. I argue that Wakefield’s account is not a promising way to distinguish between disease and health, because being harmful is neither necessary nor sufficient for a dysfunction to be a disorder. I begin by considering counterexamples to the harmful dysfunction account, such as mild infections, perceptual deficits, and illnesses that yield a net benefit. I consider two ways of amending the harmful dysfunction account to address these cases and argue that the proposed amendments raise even more serious problems for the harmful dysfunction account. I suggest that these problems will apply generally to any normativist theory and raise doubts about the entire normative approach to the philosophy of health and disease.
Daniel Munoz, Massachusetts Institute of Technology
Contingent negative existential facts—like the fact that there are no ghosts—must be grounded, but there are no facts that can ground them. Though these claims appear inconsistent, I argue that both are true. Negative existentials are contingently zero-grounded, or “grounded by default,” in the sense that they are generated from zero-many facts, just as the empty set is generated from zero-many urelements. But unlike the empty set, the contingent facts of nonexistence will fail to be generated in certain possible worlds—namely, those where there exist counterinstances. One consequence of this proposal is that we can understand the distinction between positive and negative facts in terms of grounding: the grounds of a positive fact are the disablers of its negation’s zero-ground. A more disturbing consequence is that there can be no complete, fundamental account of reality.
Bodily Perceptual Justification
Daniel Munro, University of Toronto
I argue that, although it is intuitive to assume passive bodily perception follows a similar epistemic structure to the paradigm set by visual perception, there is also an important asymmetry between these two modalities. The intuitive view is that both visual and bodily perception yield immediately justified beliefs about their content. After introducing notions I call "fine-grained" and "coarse-grained" descriptions of perceptual content, I show that this symmetry consists, more precisely, in the fact that fine-grained perceptual content immediately justifies coarse-grained beliefs. However, there is also a dimension of asymmetry: while visual perceptual experiences can serve as (non-immediate) justification for fine-grained beliefs, empirical evidence shows that bodily perceptual experiences cannot. This asymmetry makes sense given the respective functional roles of vision and bodily sensations, and may be important for developing a general theory of the epistemology of perception.
Mapping the Future: Fictions, Predictions, and Forecast Models
Ioan Muntean, University of North Carolina at Asheville
This paper argues for a specific role of fictional structures (as opposed to fictional objects) in forecast models as used in science. The conclusion is that fictions play a different, albeit important, role in forecast models, given the restricted access we have to the target space of forecast models. The focus is on the structure of the target space (Frigg), and on the non-mimetic aspect of fictions in forecast models. Fictional structures serve a better role in forecast models than hypotheses, which are typically expressed in first-order language. The paper emphasizes the radical difference between representing future states of a system and representing its past states. Mapping the future states of a system depends on assumptions which the modelers need to make explicit. The paper therefore does not discount the difference in epistemic access between past, present, and future when it comes to models.
Daniel Murphy, State University of New York College at Cortland
For some kinds K1 and K2 of features, some think there are, or might have been, entities exactly alike in K1 respects but not in K2 ones. For example, consider a world containing a pair of spatially separated, perfectly similar, lonely spheres. The spheres would be exactly alike in qualitative but not non-qualitative respects. Or consider a statue-shaped mass. Some think this thing’s location contains a second statue-shaped thing, a statue, distinct from the mass by virtue of (inter alia) its modal features. These things would be exactly alike in micro-physical but not modal respects. Let’s say a world with entities exactly alike in K1 but not K2 respects would exhibit symmetry-breaking vis-à-vis K1 and K2 goings-on. Intuitively, we would have entities that are symmetric with respect to one kind of feature, but that “break” symmetry with respect to another kind instead of maintaining symmetry. It’s widely held that, if world w constitutes a possible case of symmetry-breaking vis-à-vis K1 and K2 goings-on (i.e., is possible and exhibits such symmetry-breaking), then K2 goings-on are irreducible to K1 ones, in that at least some K2 goings-on aren’t determined by any K1 ones in w. Were this conditional right, we would have to either deny that w constitutes a possible case of symmetry-breaking or deny that K2 goings-on are completely determined by K1 ones (or both). But for some of us and some <w, K1, K2>, neither option is attractive. I argue there’s a third option. We can reject the conditional itself, and accordingly say that K2 goings-on are completely determined by K1 ones, even if the relevant world exhibits the relevant symmetry-breaking—a phenomenon I call symmetry-breaking determination. Intuitively, the symmetric K1 goings-on would determine that symmetry breaks, via the addition of the symmetry-breaking K2 goings-on, in the very way it does.
Control and Contrastive Explanations
Jonah Nagashima, University of California, Riverside
Libertarians think that some people actually act freely and that determinism precludes free will. So, libertarians conclude that our world is indeterministic. But objectors say indeterminism also precludes free will. Assume libertarianism, and that you freely raise your hand. Given indeterminism, there's a possible world that's identical to the actual world (until some suitably prior time) where you refrain from raising your hand. What explains why you raised your hand rather than not? Says the objector, there's nothing available to explain why. So, it's just a matter of luck, and luck precludes freedom. This paper grants that the failure to provide such a contrastive explanation indicates an important kind of loss of control, but denies that it lowers control to freedom-precluding levels. I argue that this line of reply best accommodates the intuitive force of the luck objection, avoids substantive metaphysical commitments, and is compatible with incompatibilist conclusions about control.
A Kripke-Style Solution to the Liar Paradox
Jay Newhard, East Carolina University
The main purpose of this paper is to present a Kripke-style formal solution (KSS) to the Liar Paradox along with some philosophical motivation for it. The formal solution is a modified version of Kripke’s well-known formal theory of truth. The solution presented here avoids the ghost of the Tarski hierarchy by showing that no ascent to a metalanguage is required when alethically evaluating the proposition expressed by a Liar Sentence.
Ambiguous Places: A Case for the Everyday Sublime
Ariane Nomikos, University at Buffalo
This paper is an attempt to capture the experience of the aesthetic character of what I call ambiguous places—places marked by both familiarity and unfamiliarity. I take Arto Haapala’s existential account of the aesthetics of everyday life as my starting point. I suggest that the experience of strangeness (characteristic of, but not unique to, unfamiliar places) can give rise to experiences of the sublime, and extend this analysis to the aesthetics of everyday life in order to make the case for a concept that at first may seem paradoxical—the everyday sublime.
Language Loss and Illocutionary Silencing
Ethan Nowak, University College London
The twenty-first century will witness a historic decline in the diversity of the world’s human languages. While I imagine most philosophers would agree that this is a lamentable state of affairs, little has been said about what exactly is lost with a language. Adapting a thread from the feminist philosophy of language, I argue that language loss constitutes a form of illocutionary silencing. When a language disappears, past and present speakers lose the ability to speak now and to posterity in their own distinctive voices.
Kant's Conception of Pure General Logic: A Reply to MacFarlane
Tyke Nunez, Washington University in St. Louis
In this essay I discuss two connected features of Kant's pure general logic: (i) that this logic provides prescriptions for all thinking whatsoever, and (ii) that this logic abstracts away from the relations its representations stand in to objects. In his "Frege, Kant, and the Logic of Logicism," John MacFarlane argues that (i) is fundamental, and (ii) follows from (i), given some of Kant's ancillary commitments. In contrast, I argue that as MacFarlane understands both (i) and (ii) they follow from a more basic characterization of pure general logic, and that this is sufficient to block the main argument of MacFarlane's essay, which intends to establish enough of a shared conception of logic to adjudicate Kant's and Frege's dispute over the plausibility of logicism.
Epistemic Asymmetry and the Role of Inner Speech in Self-Knowledge
Jordan Ochs, University of Connecticut
Some recent accounts of self-knowledge take the knowledge that we have of our states of mind to be a matter of observation and inference. One major disadvantage of this strategy is that it fails to capture the commonsense idea that, as a subject of mental states, I am in a unique and privileged position to have knowledge of those states. It is part of this idea that there is a fundamental epistemic asymmetry between myself and others when it comes to knowledge of my states of mind. Alex Byrne attempts to preserve asymmetry by appealing to the fact that we enjoy access to our own inner speech episodes (ISEs). However, I argue that the asymmetry yielded on his account turns out to be only contingent. I offer desiderata for an account of the role of inner speech in self-knowledge that aims to better accommodate the uniquely first-personal character of self-knowledge.
Skeptical Theism and the Paradox of Evil
Luis Oliveira, University of Houston
Given plausible assumptions about the nature of evidence and undercutting defeat, it has seemed to many that the force of the Evidential Problem of Evil depends on Skeptical Theism being false. I think this dialectic is mistaken. In this paper, I argue that there is a way of understanding the Evidential Problem of Evil where it is compatible with Skeptical Theism. I suggest a way of defending William Rowe's famous 1979 argument that makes it depend on the evidential support provided by the collection of instances of apparently pointless suffering in a way that is compatible with each particular instance failing to provide any support at all. I call this result The Paradox of Evil.
The Phenomenal Sharpness Argument
Joshua O'Rourke, Princeton University
It can never be vague whether a given thing is conscious. The lights are either on, or they are off. This means that any physicalist theory of consciousness must identify the property of "being conscious" with a physical property that has no actual or possible vague cases. However, most physicalist theories fail to satisfy this condition because they use inexact concepts to pick out the physical property that is supposed to be identical to consciousness. These considerations give us the following argument schema. Premise 1: It cannot be a vague matter whether something is conscious. Premise 2: Theory X identifies being conscious with possessing a property that has vague cases. Conclusion: Theory X is false. In this paper, I argue that this generic argument can be applied to most of the physicalist theories of consciousness that have been offered so far.
Yes, We Are Luminous
Donald Page, Saint Louis University
Srinivasan (2013) has offered a defense of a controversial margin-for-error premise that figures in Williamson's (2000) Anti-Luminosity argument. She argues that the premise follows from a safety condition on knowledge together with what she considers to be “a plausible empirical hypothesis about the kind of creatures we are—creatures, namely, whose beliefs are structured by certain kinds of dispositions.” I will argue that Srinivasan’s defense of the argument fails because the empirical hypothesis she appeals to is false for non-gradable conditions and implausible for some gradable conditions. Section 2 articulates Williamson’s anti-luminosity argument and Srinivasan’s defense of it. Section 3 raises objections to the empirical hypothesis that Srinivasan utilizes in her defense. I argue that there are phenomenal conditions that are counterexamples to her empirical claims, and thereby luminous. Accordingly, there is good reason to think that, contrary to what Williamson and Srinivasan claim, we are luminous after all.
Kantian Agents and Their Significant Others
Nataliya Palatnik, University of Wisconsin–Milwaukee
Critics of Kant’s moral philosophy often object that his emphasis on individual autonomy makes him unable to account for our “second-personal” or “bipolar” duties. These are duties we owe to other people rather than duties we have with respect to them – as we might have duties with respect to the environment or works of art. I consider a novel version of this objection raised by Michael Thompson, which connects the concern about Kantian treatments of bipolar normativity to an emptiness charge. I show that Kant can answer this objection by considering his conception of moral agency within a larger context of the systematic structure of his moral theory. In particular, I argue that Thompson’s objection fails to get off the ground because it is raised from a perspective that is incompatible with Kant’s inherently practical approach to moral philosophy.
“We Know Nothing About Her”: Hortense Spillers’s “Ungendering” and Frantz Fanon’s Unfinished Argument in Black Skin, White Masks
William Paris, Pennsylvania State University
Fanon’s infamous claim that “we know nothing about” Black women and their psychic suffering under colonialism, when reformulated through Hortense Spillers’s concept of “ungendering,” articulates an understanding of Black life and Black suffering that requires a radically different foundation than what has been provided by psychoanalysis. Fanon and Spillers show that the violence of colonialism and slavery is the grounding of an antagonism between Blackness and being. This antagonism was constructed by making Black women and births an impossible foundation for Black life. Through Fanon and Spillers the paradigmatically unfinished work of Black life will be addressed.
Sidgwick’s Critique of Deontology: Scrupulous Fairness or Serpent-Windings?
Tyler Paytas, Australian Catholic University
Although Sidgwick has long been admired for his impartial and objective approach to philosophical investigation, there has recently been a wave of challenges to his reputation for intellectual honesty. David Phillips (2011) and Thomas Hurka (2014) allege that Sidgwick applied a double standard in arguing against deontology and for consequentialism. While Sidgwick rejected deontology because it fails to meet his criteria for epistemic justification and practical guidance, he purportedly did not test his own favored principles by the same standards. I argue that the unfairness objection is based on a misunderstanding of Sidgwick’s overall case for the superiority of consequentialism. While it is true that Sidgwick’s consequentialist axioms do not fare perfectly against his four tests for “highest certainty,” the application of these tests is not all-or-nothing. The fact that the axioms pass the first and second tests gives them a distinct advantage over deontological principles that fail all four.
Cognitive Disability and the Space for Moral Standing: Foot’s Answer to Aristotle
Alexandra Peabody, University of California, Los Angeles
Aristotelian natural hierarchies are notoriously exclusionary. One group of human beings for whom there seems to be neither any possibility for inclusion within the sphere of moral standing, nor the potential for a well-lived life, is cognitively disabled human beings. In this paper, I read Philippa Foot’s neo-Aristotelianism in Natural Goodness as an improvement upon Aristotle’s hierarchies in that she creates the space for the moral personhood of cognitively disabled individuals. She successfully separates the question of moral standing from those of eudaimonia and moral goodness in a way that Aristotle was unable to. However, I argue that her reliance on natural ontologies and practical reason as a status-conferring attribute ultimately renders her account of human life only slightly less problematic than Aristotle’s. Cognitively disabled individuals, on Foot’s reading, are necessarily “defective,” and this language is in itself cringe-worthy and degrading to individuals living with cognitive disabilities. The main tension arises because Foot seems to omit those lacking rational capacities from her moral landscape. She cannot account for the possibility of a well-lived non-neurotypical life, despite the moral impetus for neurotypical individuals to somehow provide this. In this vein, she fails to recognize the moral solicitude brought on by those lacking rational wills, and this category extends well beyond those with cognitive disabilities. So, while Foot successfully makes strides by unraveling the question of moral standing from that of moral goodness, she ultimately falls short of shedding the exclusionary framework characteristic of Aristotelianism.
On the Irrelevance of Indeterminism in Peter Tse's Theory of Free Will
Zach Peck, Georgia State University
Many philosophers and scientists have suggested that indeterminism is a necessary condition for free will. Those who hold this view are referred to as libertarians. Alternatively, compatibilists argue that free will is compatible with determinism; therefore, indeterminism is not a necessary condition for free will. One commonly cited reason in favor of libertarianism is that determinism precludes the possibility of mental causation. In The Neural Basis of Free Will, Peter Tse describes a three-stage model of criterial causation, which he argues provides the mechanistic basis for a libertarian theory of free will. According to Tse, mental causation is a particular type of criterial causation. Although Tse argues that indeterminism is necessary for mental causation, he acknowledges that criterial causation is compatible with determinism. Nevertheless, he maintains that mental causation is incompatible with determinism by insisting that indeterminism is necessary to avoid Jaegwon Kim’s argument against mental causation. Drawing on the work of philosophers who have argued that causal interventionism allows for a compatibilist account of mental causation that can avoid Kim’s challenges, I argue that Tse’s theory of free will does not require indeterminism. Hence, I conclude that Tse’s theory is actually compatibilist, not libertarian.
The Material vs. Formal A Priori: Kant, Cassirer, and Merleau-Ponty
David Pena Guzman, Johns Hopkins University
This paper sheds light on Maurice Merleau-Ponty’s concept of the material a priori. It argues (i) that this concept is at once an appropriation and a reformation of the concept of the a priori that appears in Kant’s Critique of Pure Reason (1781) and (ii) that Merleau-Ponty developed this concept, at least in part, through his reading of Ernst Cassirer’s works. In Philosophy of Symbolic Forms, Cassirer uses the notion of ‘symbolic pregnancy’ to defend a philosophical interpretation of experience in which the realm of the sensible is not a passive medium that stands in need of formation by the active intellect, but a field already pregnant with sense and form. In Phenomenology of Perception (1945), Merleau-Ponty deploys this Cassirerian notion to bring the Kantian concept of the a priori under the auspices of a materialist philosophy rooted in the primacy of the sensible.
The Austerity Argument for Compatibilism
Garrett Pendergraft, Pepperdine University
The concept of freedom is the mental entity that partially constitutes our thoughts about freedom. A conception of freedom, on the other hand, is a view of what freedom is and what it requires. When we examine important philosophical concepts, such as the concept of freedom, we often do so by asking questions about the corresponding conception. My goal here is to explore a particular question of this type—namely, the question whether freedom requires that we be able to do otherwise, holding fixed the past and the laws of nature. I will argue that the correct conception of freedom does not include this incompatibilist requirement. My argument will depend on an important distinction between relatively “austere” and relatively “opulent” conceptions of a given concept, the relative austerity of the compatibilist conception of freedom, and the claim that we should adopt a presumption of austerity.
Husserl on Parts, Wholes, and Community Membership
Sean Petranovich, Loyola University Chicago
This paper provides an interpretation of Husserl’s concept of personal community on the basis of his formal theory of parts and wholes. Husserl’s unique ontology of community is first explicated. I then argue that his account of experiences of community from the perspective of membership follows directly from his mereology in a way that has been overlooked in the secondary literature. My specific focus is on experiences of membership in “intimate” communities, appealing to Husserl’s technical notion of intimacy as found in his theory of parts and wholes. I argue that Husserl’s notorious account of some communities understood as person-like or as “personalities of a higher order” is indicative of mereological intimacy, and should not be interpreted according to the colloquial connotations of intimacy.
The Modal Status of Leibniz's Principle of Sufficient Reason
Owen Pikkert, University of Toronto
I consider and reject several arguments that purport to show that Leibniz took the principle of sufficient reason to be a necessary truth. I then advance an argument to show that he was committed to the contingency of the principle.
Expertise and Educating for Excellence: Socrates on Soul Care as Techne in the Laches
Allison Pineros Glasscock, Yale University
Although Plato’s Laches is famous for its discussion of courage, it begins with a question about education. Socrates’ interlocutors want to know what disciplines or pursuits their sons should learn in order to become as good as possible. Socrates argues that only an expert in the care of soul will be able to answer this question. Socrates’ argument for this claim depends on (a) a set of arguments articulating some norms of expertise and (b) the assumption that the interlocutors are deliberating about a matter which is governed by those norms, i.e., that they are deliberating about a matter of expertise. This paper focuses on (b), the assumption that deliberation about how to make people good concerns a matter of expertise. I consider and reject two prima facie plausible explanations for why Socrates and his interlocutors might endorse this assumption: the assumption cannot be explained by the fact that the interlocutors’ deliberation concerns the discipline or expertise of fighting-in-arms, nor can it be explained by appealing to the (here contested) view that virtue itself is an expertise or kind of knowledge. I then propose a third and novel explanation for the assumption. I argue that Socrates makes (and that his interlocutors can and should endorse) this assumption on the grounds that making people virtuous (through the prescription of various disciplines and practices) is itself an expertise. I conclude with a brief discussion of the implications this view has both for Socrates’ demand, later in the dialogue, for an account of courage and for his more general method (in other dialogues) of trying to show that virtue is teachable by showing that it is knowledge.
Action and Luminosity
Juan Pineros Glasscock, Yale University
A central thesis of Anscombe’s Intention (1958) is that it is impossible for an agent to F intentionally without knowing that she is Fing. Although the scholarly consensus for many years was to reject the thesis in light of presumed counterexamples by Davidson (2001b), several scholars have recently argued that attention to aspectual distinctions shows that these counterexamples fail (Small, 2012; Stathopoulos, 2016; Thompson, 2011; Wolfson, 2012). In this paper I offer a new argument against the thesis, one modelled after Williamson’s (2000) anti-luminosity argument. Since this argument relies on general principles about the nature of knowledge, rather than on intuitions about cases, the recent responses that have been given to defuse the force of Davidson’s purported counterexamples are silent against it. What’s more, the argument shows that weaker theses that philosophers have defended connecting other practical entities (e.g., intentions, attempts, basic action, etc.) with knowledge are also false.
The House of the Good: A Special Kind of Cause in the Philebus
John Proios, University of Arizona
An account of Platonic causation is important for understanding Plato’s metaethics: the Form of the Good is causal, and goodness is an essential part of teleological causes. Yet interpretations of Plato’s theorizing about causal explanation focus on a few stock issues: the Phaedo, Forms, anti-materialist arguments, and teleology. I argue that the Philebus offers some new material. The account of goodness and the ranking at the end of the dialogue should be understood in light of a refined notion of causation introduced earlier. Socrates proposes that the possession of either a property or an object is a cause of being in some state for the possessor. The central way that possession is causal is by identity: the possession of A causes B because possession of A is B. This allows Socrates to run together causal relations and identity relations: he finds the causes of goodness by finding what goodness is.
Anger: Scary Good
Samuel Reis-Dennis, University of North Carolina at Chapel Hill
I argue that by over-rationalizing the moral “conversation,” “exchange,” or “protest” that follows transgression, recent blame theory has failed to explain why full-blooded, painful expressions of anger and resentment are better “conversation starters” or means of “moral protest” than sadness, disappointment, or stone-cold reasoning. This paper attempts to fill that gap. I contend that, characteristically, anger is effective because it is scary; its connection to action and (sometimes violent) threat allows those who employ it to stand up for themselves, establish or reestablish social standing and self-respect, and bring transgressors back into the moral fold. I conclude that we must leave space in moral life for the expression of angry, scary blame. Life without a practice that allows us to express these angry attitudes, in addition to being alien, would be impoverished, depriving us of the ability to fight for our relationships and respect with dignity and authenticity.
Explanation as a Cluster Concept
Collin Rice, Bryn Mawr College
Yasha Rohwer, Oregon Institute of Technology
In this paper, we argue that scientific explanation is a cluster concept. We contend that there are multiple subsets of features that are sufficient for providing an explanation, but no single feature is necessary for all explanations. Reconceiving of explanation as a cluster concept allows us to maintain a unified concept of explanation while still recognizing the myriad ways scientists provide explanations.
Easy Knowledge of Our Own Intentions
Catherine Rioux, University of Toronto
Sarah Paul holds that a decision to F, owing to its status as a conscious mental act, can provide one with prima facie, immediate justification for the belief that one intends to F. I argue that Paul’s model licenses presumably unacceptable inferences to the reliability of decisions as guides to our intentions, i.e., inferences to the conclusion that we are not in a psychological situation in which we have decided to F but do not intend to F. I then examine two different ways of explaining why such inferential reasoning does not constitute an undue epistemic ascent to the reliability of decisions. The first focuses on the prima facie character of the justification provided by decisions, whereas the second is an attempt to distinguish the source of our justification for introspective beliefs about our intentions from the source of our justification for the negation of skeptical hypotheses.
Developmentalism and Practical Knowledge
Devlin Russell, University of Toronto
In this paper, I argue for a novel theory of intentional action by arguing that it helps explain why intentional action and self-knowledge of action go together. According to this theory, actions develop and intentional action is a certain way for an action to develop. Just as a cherry develops through the power of a cherry tree to make a bud and make a blossom, an intentional action develops through the power of an agent to make a desire and make an intention—more precisely, to deliberate and reason. This theory gives a better explanation than cognitivism about intention, according to which an intention is a belief, because it does not make forming an intention look irrational. It is better than inferentialism about practical knowledge, according to which knowledge of action is inferred from knowledge of intention, because it does not make knowledge of action contingent. And more generally, it is better than statism, according to which an intention is a state, because it does not naturally lead to the problem of causal deviance.
Religion and Happiness
Paul Saka, University of Texas, Rio Grande
The new millennium has seen explosive growth in scientific studies claiming to find links between religion and happiness. It is widely presumed (and sometimes explicitly argued) that such findings support the conclusion that religion is good, deserving general promotion and adoption. As it turns out, however, the empirical and philosophical literatures at issue are strikingly flawed. In particular I show that the canonical meta-analysis reporting a correlation between religion and happiness ignores response biases, experimenter biases, and publication biases in the primary literature, and it compounds these biases by reading the primary literature selectively. What’s more, even if religion did cause individual happiness, religion would not necessarily be good for us. Investment costs and opportunity costs would need to be considered before any final cost-benefit analysis could be made, and relevant reference classes must be established before rational prescriptions can be drawn.
Non-Identity and Reactive Attitudes
Nicholas Sars, Tulane University of New Orleans
Theorists sometimes appeal to attitudes or emotions when arguing about moral concepts such as wrongness, rights, or obligations. I examine the use of a particular class of attitudes, namely the reactive attitudes, in arguments in which the concept of non-identity plays a central role. At first glance, non-identity might seem to undermine the fittingness of reactive attitudes such as resentment and indignation; however, I argue that in many familiar cases appealing to such attitudes is perfectly intelligible. In cases of preconception decisions and of reparations for past injustices I vindicate intuitions regarding privileged standpoints for resentment. In the case of decisions affecting further future generations, grounds for claiming a privileged standpoint for resenting the past are less clear; however, I show that a generalized reactive attitude like indignation can be fitting.
Sensory Experiences Are Ontologically Opaque
Chris Schriner, Independent Scholar
This paper critiques the claim that introspection reveals the ontology of sensory phenomena. If we lack such ontological access, several problems of consciousness become easier to solve. For example, one of the most challenging explanatory gaps between experiential states and brain states disappears if we do not subjectively detect ontologically puzzling phenomena. Similarly, the well-known “Mary” scenario depends on the intuition that color experiences are ontologically remarkable. If that intuition is false, Mary’s new experiences are philosophically unproblematic. The paper offers five arguments supporting the claim that introspection fails to disclose the ultimate nature of sensory experiences. It concludes by considering the plausibility of this skeptical stance.
Knowing a Genetic Donor: Rights and Interests
Olivia Schuman, York University
I am contesting the recent shift in policies world-wide that mandate the removal of gamete donor anonymity. Those in favour of removing anonymity typically assume that a donor-conceived person has a “right to know” her genetic donor. In this paper, I subject the notion of “knowing” a donor to an analysis which draws out the different interests a donor-conceived individual might have in knowing her donor. I do this by outlining five points across a spectrum of knowledge which characterize the relation between the donor and the donor-conceived individual. Ultimately, I conclude that a donor-conceived individual has a right to non-identifying information about the donor, which can be satisfied without removing donor anonymity. However, a right to identifying information (gained through acquaintance with the donor) cannot be similarly justified. Thus, the “right to know” cannot serve as justification for removing donor anonymity.
A (Partial) Possible Worlds Semantics for Reasons
Lucia Schwarz, University of Arizona
In this paper, I develop a novel semantics for the expression “x is a reason for y.” In doing so, I build on a long-standing tradition of interpreting normative expressions in terms of rankings of possible worlds. However, in order to capture the notion of a reason, I think we need to slightly modify the traditional framework and use partial possible worlds instead of ordinary possible worlds. A partial possible world is an aspect or a set of aspects of a world that does not amount to an exhaustive specification of a way the world could be. I propose a semantic analysis of “x is a reason for y” in terms of rankings of partial possible worlds.
Natural Goodness is Good For You: Well-Being and the Rational Authority of Human Nature
Matthew Shea, Saint Louis University
Neo-Aristotelian ethical naturalism holds that moral goodness is a form of natural goodness: moral evaluation is based upon facts about the human species or life form. One of the most important objections to this view is the “authority of nature challenge,” which demands some reason or explanation why we should care about being good human beings. If moral goodness is species-specific natural goodness, then morality can have rational authority for us only if human nature has rational authority for us, which it appears to lack. I defend a response to this challenge that links natural goodness to human well-being, and I argue that by adopting a nature-fulfillment account of well-being, Aristotelianism can show why human nature has the right kind of rational authority. I also discuss some important implications of my response for moral theories that ground moral goodness in human nature, namely eudaimonistic virtue ethics and natural law theory.
Freedom and Responsibility in Kant
Adam Shmidt, Boston University
If freedom is uniquely realized in moral conduct, and freedom is required for moral responsibility, then responsibility for morally impermissible actions seems impossible. The tension between moralistic accounts of freedom and responsibility has often been emphasized by critics of Kant. In this paper, I offer a formulation of the problem as it arises in Kant’s ethics, argue that the dominant response on Kant’s behalf fails, and present an alternative Kantian reply. On my view, judgments of responsibility are justified by moral considerations of respect, and Kantians should be skeptical of the claim that theoretical issues concerning free will must be settled in order to justify judgments of responsibility.
Thomas Reid on All Things Considered Duties to Believe
Christopher Shrock, Oklahoma Christian University
Philosophers often regard early modern philosopher Thomas Reid as the muse of a thesis one might call Proper Functionalism, on which beliefs are justified when caused by rightly working mental and bodily processes. Certainly, Reid attends carefully to the role of externals in the acquisition of knowledge, but, I argue, he is not a Proper Functionalist about doxastic justification. Rather, Reid holds that rational agents are justified in their use of voluntary, belief-causing faculties whenever they employ those faculties in accordance with their moral duties. The beliefs themselves, being involuntary results of complex and various processes, are, strictly speaking, always justified. In this paper, I present and defend Reid’s account of doxastic justification, responding to earlier misreadings of Reid and answering objections concerning the (in)commensurability of moral and epistemic norms. I couch my presentation of Reid’s view as an extension of and corrective to a recent article by Anthony Booth.
Henry More, Holenmeric Souls, and the Unity of Consciousness Argument
Daniel Simpson, Saint Louis University
If a human being has an immaterial soul, then where is her soul located? According to the medieval scholastic account of immaterial presence, which Henry More calls “holenmerism” (or “whole-in-the-part-ism”), the soul is co-located with the body by being wholly located in the whole body and wholly located in each part. One of the reasons motivating holenmerism is a sort of “unity of consciousness” argument. But More thinks the holenmeric explanation of the unity of consciousness is contradictory and unsupported by the best scientific theories of his day. Although the secondary literature has examined many of More’s criticisms of holenmerism, these particular criticisms have scarcely been mentioned, let alone examined. In this paper I examine More’s criticisms of this argument, and look at the prospects and perils that this argument might offer for contemporary discussions in human ontology and the philosophy of mind.
Sui Generis Linguistic Norms
Robert Siscoe, University of Arizona
Contemporary philosophy of language contains much discussion of linguistic norms, such as the norm of assertion and the Gricean maxims, and has primarily been centered around identifying which norms there are. There has been little work, however, on what grounds such norms. In this paper I explore how linguistic norms might be related to other types of normativity, such as moral norms and norms of etiquette, arguing that linguistic norms come apart from both normative domains in important ways. I argue that linguistic norms should be understood as a sui generis form of normativity, one that has its roots in what constitutes a practice of effective communication.
Jeremy Skrzypek, Saint Louis University
Of those who defend a Thomistic hylomorphic account of human persons, “survivalists” hold that the persistence of the human person’s rational soul between death and the resurrection is sufficient to maintain the persistence of the human person herself throughout that interim. (“Corruptionists” deny this.) According to survivalists, at death, and until the resurrection, a human person comes to be temporarily composed of, but not identical to, her rational soul. One of the major objections to survivalism is that it is committed to a rejection of a widely accepted mereological principle called the weak-supplementation principle, according to which any composite whole must, at every moment of its existence, possess more than one proper part. In this paper, I argue that by recognizing the existence of certain other metaphysical parts of a human person beyond her prime matter and her rational soul, hylomorphists can adhere to survivalism without violating the weak-supplementation principle.
Water, Games, and Causes: A Decision Procedure for Essence Hunters
Robert Smithson, Duke University
We think that some things have a unified nature for science or philosophy to discover. For example, we think science has revealed the nature of water to be H2O. We think that other things do not have a unified nature. For example, there doesn’t seem to be any informative analysis of gamehood. What about causation, or dispositions, or free will? For any philosophically-interesting item X, one can find a host of competing theories regarding X’s nature. In offering such analyses, philosophers assume that X is relevantly similar to water, not gamehood. In this paper, I show how this assumption might be challenged. I identify two features of the term “water” that distinguish it from the term “game.” By determining whether a philosophical term “X” shares these features, we have a method for determining whether X can be given an informative analysis. As a test case, I consider the term “cause.”
Plasticity, not Pathology: Merleau-Ponty and the Case of “Schn.”
Bryan Smyth, University of Mississippi
This paper reconsiders Merleau-Ponty’s well-known use of the case of “Schn.” (Schneider) by refuting the long-standing (but hitherto unrefuted) objection of methodological equivocation. I first lay out the epistemic framework of Kurt Goldstein’s holistic conception of biology, and then show that the methodological commitments informing Merleau-Ponty’s use of “pathological” cases derive directly from that conception. On this basis I argue that Merleau-Ponty does not construe Schneider as a “pathological” case simpliciter, but that his approach is premised on a generative concept of organismicity that serves to reveal a general (normal) plasticity within individuals’ intentional relatedness to the world. By way of conclusion, I elaborate on some implications of this analysis by considering its unexpected incongruity with the idea of “bodily” or “motor” intentionality.
In Defense of Subjectivism about Moral Obligation
Jonathan Spelman, University of Colorado Boulder
According to objectivism about moral obligation, an agent’s moral obligations are independent of her beliefs or her evidence. Although the leading theories in normative ethics have traditionally been formulated as versions of objectivism, objectivism has recently come under fire from prospectivists (e.g., Elinor Mason and Michael J. Zimmerman) who believe that an agent’s moral obligations depend on her evidence. As it turns out, however, the reasons for moving from objectivism to prospectivism also speak in favor of moving from prospectivism to subjectivism, the view that an agent’s moral obligations depend on her beliefs. Moreover, none of the most common objections to subjectivism are successful. Thus, subjectivism deserves more attention than it has received.
Practical Perceptual Representation
Alison Springle, University of Pittsburgh
Tyler Burge (2010) defends the view that perceptual states are genuinely intentional. I agree with him. However, according to Burge, for a state to be genuinely intentional, it must abide by descriptive norms (truth, veridicality, or accuracy). In his criticism of teleo-semantics, which grounds original intentionality in natural relations (e.g., causal covariation) and biological functions, Burge distinguishes between descriptive success, which consists in attaining truth or accuracy, and practical success, which need not involve truth or accuracy. Because biological functions are essentially practical, Burge argues, they cannot ground genuinely intentional states. Here, I don’t agree with him. Even if biological functions aim exclusively at practical success, they can still ground genuinely intentional states. I propose a set of conditions for assessing whether a state counts as genuinely intentional, and I argue that “instructional perceptual content”—a novel, fundamentally practical (non-descriptive) construal of perceptual content—satisfies these conditions.
A Category Mistake in Philosophy of Consciousness and Naturalistic Dualism
Marco Stango, Pennsylvania State University
The paper shows how the so-called problem of consciousness dissolves when we acknowledge that its formulation relies on a category mistake. In a manner analogous to Kant’s and Frege’s treatment of existence, the theory sketched in this paper clarifies consciousness as the mere eventuation of neurophysiological properties. In this view, treating the phenomenal properties of a human being’s conscious states as real determinations of that being, additional to her neurophysiological properties, is a category mistake. This amounts to a purely reductionist view of consciousness. The paper also discusses David J. Chalmers’s “naturalistic dualism” to test this proposal.
Supersententialism and the Problem of the Many Sentences
Rohan Sud, Bates College
I argue that a new version of supervaluationism, which I call supersententialism, is the most plausible version of supervaluationism. According to supersententialism, instead of speaking a single language with multiple precisifications, we are simultaneously speaking many precise languages. Standard supervaluationism conflicts with the disquotational feature of the truth predicate. I will show that supersententialism offers a fully reductive account of indeterminacy that satisfies the disquotational feature of truth. The strategy underlying my defense is to treat the T-schema as an instance of the Problem of the Many, applying tools originally developed in response to the Problem to the present debate.
Desert as a Limiting Condition
Steven Sverdlik, Southern Methodist University
Some retributivists now assert that desert claims operate as a “limiting condition.” If a person deserves a certain amount of punishment, then her desert operates as an upper limit on how severely she may be punished. Such a limit permits the pursuit of deterrence so long as punishments do not breach it. I survey the considerations that retributivists assert affect what an offender deserves for an act of wrongdoing. Two important kinds of consideration either certainly are not relevant to an offender’s desert, or may not be. These are an offender’s risk of reoffending and her probability of being punished. It might be said that desert limits are high enough so that these two considerations can play a significant role in setting severity levels. I argue that a plausible version of the lex talionis shows that this claim is false. Desert sometimes sets an implausibly low limit on punishment severity.
Creation as Efficient Causation in Aquinas
Julie Swanstrom, Armstrong Atlantic State University
In this paper, I explore Aquinas’s account of divine creative activities as a type of efficient causation. I propose that Aquinas’s works hold a framework for understanding God as an efficient cause and creating as an act of divine efficient causation without abandoning the basics of Aristotle’s account of efficient causation. To that end, my paper includes a demonstration that Aquinas does not simply describe creation as efficient causation but that divine creating entails the components of Aristotelian efficient causation. I show how this efficient causation without a patient—creation ex nihilo—fits within the general structure of what Aquinas takes to be efficient causation as articulated by Aristotle. For Aquinas, Aristotelian efficient causation is not incompatible with God creating ex nihilo. I show that Aquinas seems to think that his understanding of creating as efficient causation is fundamentally Aristotelian.
A Solution to the Problem of Moral Luck
Philip Swenson, Rutgers University
Moral luck is, roughly, the varying of one's degree of praise or blameworthiness due to factors beyond one's control. Many of us have an inkling that there is something wrong with moral luck. However, rejecting moral luck appears to require extreme revision of our everyday judgments about who deserves praise and blame. Thus many still accept the existence of moral luck. I aim to develop a view which (i) captures our anti-moral luck intuitions but (ii) requires much less revision of our everyday moral judgments than do previous attempts to reject moral luck. I claim that every possible situation in which one could choose has (so far as praise and blameworthiness are concerned) the same expected value. This allows us to account for our anti-moral luck intuitions while preserving much of commonsense morality.
Russellian Monism Without Panpsychism
Henry Taylor, University of Cambridge
Recently, there has been a surge of interest in versions of Russellian monism that embrace panpsychism: the view that consciousness is ubiquitous at the micro-level. In this paper, I first outline Russellian monism, then examine the two most popular arguments in favour of panpsychist versions of the doctrine. I argue that they are unsuccessful. This suggests that, though there may be good reason to be a Russellian monist, panpsychist versions of the view are unmotivated.
Wild Chimeras: Kant on the Dangers of Enthusiasm
Krista Thomason, Swarthmore College
Enthusiasm (Schwärmerei) makes many appearances in Kant’s work, but what role it plays in Kant’s thinking is unclear. Sometimes he seems to classify it as a mental illness (2:267). In other works, Kant’s assessments are harsher: in the Religion, Kant goes so far as to call enthusiasm “the moral death of reason” (6:175). Given the variety of Kant’s remarks, what precisely is the problem with enthusiasm? There is a common feature that enthusiasts share: they become too enamored with their own ways of thinking, and thus refuse to subject reason to self-critique and become impervious to the reason of others. The particular danger of enthusiasm is that reason colludes in its own destruction: enthusiasm occurs when self-conceit and reason’s desire to transcend its boundaries mutually reinforce each other. On Kant’s view, enthusiasm is neither mysticism nor madness, but rather an intellectual vice.
Accurate Distortion: Perception, Truth, and Utility
Rebecca Traynor, The Graduate Center, CUNY
I examine perception’s function and truth conditions using theories and data from philosophy, psychology, and neuroscience. Specifically, I enter the philosophical debate about whether perception is penetrable, top-down, by higher-level cognitive states such as beliefs and desires. I rebut objections offered by impenetrability theorists and present a wealth of data favoring penetrability by non-visual information. Given this, I argue perception is penetrated by and represents the world relative to states of perceivers. Indeed, I claim perception is accurate not when it captures objective facts about the world as it is in itself, but, rather, when it represents a special set of further facts about the relationship between perceivers and their environments in order to promote possibilities for safe, effective, and efficient action. On my approach, then, perception promotes possibilities for action by making them visually apparent; it “accurately distorts” objective facts about the external world because its function is “seeing-in for-action.”
Cassirer’s Philosophy of Symbolic Forms As Antidote to the “Post-Truth Era”
Simon Truwant, Katholieke Universiteit Leuven
My paper argues that the ultimate motivation propelling Ernst Cassirer’s philosophy of symbolic forms is to regain “a general orientation to which all cultural differences might be referred.” This view of Cassirer’s thought discards some of its persistent criticisms and presents it as a possible antidote to the “post-truth era.” Although Cassirer celebrates the substantial diversity of cultural perspectives (section 1), he nevertheless emphasizes the unity of human culture by grounding this diversity in a “functional conception of human consciousness” (section 2). For Cassirer, then, only critical philosophy can relate these two poles of human culture and offer (normative) guidance in a pluralized world (section 3).
Mill on Ideological Conversion and Social Reform: An Interpretation of Mill’s Argumentative Strategy in The Subjection of Women
Van Tu, University of Michigan
The author of The Subjection of Women appears to many scholars to be a philosopher who is infused with zeal for gender equality but ultimately confused. A problem that contributes to this perception is that Mill appears to endorse two contrasting variations of feminism in The Subjection of Women—difference feminism and liberal feminism. I propose an interpretation of Mill’s argumentative strategy that can make sense of this apparent tension by drawing insights from his writing in the essay “Coleridge.” Mill thinks that successful social changes must be made gradually and be built upon existing civil society and that one can only hope to convert a conservative opponent to liberalism by leading the opponent to adopt one liberal opinion at a time, as a part of Conservatism itself. This interpretation can provide a greater understanding of The Subjection of Women and a more charitable assessment of its author.
Is a Deaf Future an Open Future?
Paul Tubig, University of Washington
Pre-implantation genetic diagnosis and in-vitro fertilization allow prospective parents to choose which embryo to implant on the basis of its genetic traits. One prominent argument is that deliberately selecting a deaf embryo is morally wrong because it violates the child’s right to an open future (CROF). Deafness is understood as a genetic condition that leads to substantial constraints in opportunities. In this paper, I argue that CROF does not adequately support the conclusion that selecting for deafness is morally wrong. First, the right does not specify the minimum range of opportunities that would characterize an open future. Second, the right is unspecific as to what counts as a diverse range of opportunities. Third, the right overlooks relevant social factors that might be responsible for a group’s limited range of accessible opportunities. Thus, the simple appeal of CROF does not adequately defend the moral wrongness of deliberately selecting a deaf embryo.
Dogmatism and the Epistemology of Covert Selection
Chris Tucker, College of William and Mary
Perceptual dogmatism is, roughly, the thesis that perceptual experiences always prima facie justify believing their contents. Perceptual dogmatism faces many problems, including those posed by the (metaphysical) possibility of cognitive penetration. Recently, the literature on the metaphysics and epistemology of covert attention has ballooned, where attention is covert to the extent that it is not due to the position and orientation of your body and sensory organs. The relationship between covert attention and cognitive penetration is controversial, so one might wonder: does the possibility of biased covert attention raise any new challenge for dogmatism, a challenge that dogmatism does not already face? I argue that the answer is no. This is good news for dogmatism: the fewer distinct challenges for dogmatism, the more likely it can answer them all.
Aristotle on Discriminating the Common Sensibles
Rosemary Twomey, Simon Fraser University
I defend a deflationary reading of Aristotle’s distinction between in-itself and coincidental perception. Unlike rival interpretations, I argue that in-itself perception is not grounded in an efficient causal relation between the sense and the object, but rather in a teleological connection: sight is for color, not for the son of Diares; so we see color in itself, but the son of Diares coincidentally. On this reading, to say that the son of Diares is seen coincidentally is not, then, to say he is not really or directly seen, but rather that we will not come to understand seeing by attending to such cases. This interpretation fits well with Aristotle’s general methodological principle of analyzing a faculty in terms of its object, but in this paper I also show that it is uniquely able to explain Aristotle’s otherwise puzzling remarks in DA III.1 about the purpose of the so-called “common-sense.”
What Is a Relational Virtue?
Sungwoo Um, Duke University
The aim of this paper is to introduce the concept of relational virtue and argue that it is a meaningful subcategory of virtue that has ethical significance. In particular, I argue that it offers a valuable resource for answering questions concerning the value of personal relationships. After briefly sketching what I mean by relational virtue, I show why it is a virtue and in what sense we can meaningfully distinguish it from other sorts of virtues. I then describe some distinctive features of relational virtue in more detail and discuss their implications.
Aristotle on Ontological Priority
Hikmet Unlu, University of Georgia
In several passages Aristotle explains ontological priority in terms of ontological dependence, but in others he seems to adopt a teleological conception of priority. It is not clear why Aristotle would give us two different accounts of ontological priority, but commentators have offered various suggestions. Many hold that despite appearances to the contrary, there is only one kind of ontological priority in Aristotle’s work. The main goal of this paper is to show otherwise; I argue not only that there are two priorities at issue but also that Aristotle himself distinguishes between them. I also briefly discuss Ross’s suggestion that the two senses of ontological priority can be traced back to the two senses of substance; I argue that Ross’s interpretation is superior to its alternatives but that it also faces problems of its own.
The Phenomenal I and the Phenomenological Contrast between Affection and Volition
Stefano Vincini, Universidad Nacional Autónoma de México
This paper is an exercise in phenomenology that makes use of resources in Husserl and Ricoeur. These authors converge on the idea that the notion of a subjective pole of intentional experience is necessary to account for the differences between volition and affective phenomena. I argue that, while affection “draws” or “pushes” the “I” toward action possibilities (centripetal intentionality), volition entails a centrifugal form of intentionality in that the “I” adheres to affective inclinations. I exhibit the validity of this claim at three levels of volition: deliberate decision, habitual willing, and instinctive action.
An Epistemic Argument for Liberalism about Perceptual Content
Preston Werner, Hebrew University
This paper concerns the question of which properties figure in the contents of perceptual experience. According to conservatives, only low-level properties figure in the contents of perceptual experience. Liberals, on the other hand, claim that high-level properties, such as natural kind properties, artifacts, and even moral properties, can figure in the contents of perceptual experience. I defend a novel argument in favor of liberalism, the Epistemic Argument, which hinges on two crucial claims. The first is that many perceptual experiences of even neurotypical human beings can justify beliefs in high-level properties without providing justification for their low-level constituents. The second claim, roughly, is that any experience that alone provides (defeasible) justification for beliefs about some property p, other things being equal, has p as part of its content. In short, certain perceptual experiences represent high-level but not low-level properties, which entails that liberalism is true.
Character, Mindreading, and the Action-Prediction Hierarchy
Evan Westra, University of Maryland
My goal in this paper is to integrate our understanding of character-trait attribution with other aspects of theory of mind. I will propose that we use representations of a person’s stable character traits to inform our hypotheses about her more transient mental states—namely, her beliefs, goals, and intentions—which we in turn use to predict her behavior. Trait attribution thus forms the upper level of an action-prediction hierarchy, wherein the hypotheses at higher levels inform the hypotheses at lower levels. Feedback from observable behavior then leads us to make revisions to our mentalistic hypotheses, which might occur at either the belief-desire level or at the level of character traits. This basic inferential structure is best understood in terms of a hierarchical, Bayesian model of cognition, or Bayesian predictive coding (BPC).
Subgroup Not Subset: On Comparing Spacetime Structure
Isaac Wilhelm, Rutgers University
The SUBSET principle provides a method for comparing the structures of any two spacetimes. According to SUBSET, if spacetime S has more structure than spacetime T, then the automorphism group of S is a proper subset of the automorphism group of T. But SUBSET seems to yield the wrong results in a number of cases. The problem, I argue, is that SUBSET fails to compare automorphism groups properly. The proper method for comparing automorphism groups relies upon the subgroup relation, not the subset relation.
Evidence and Accuracy in Transformative Decisions
Daniel Wilkenfeld, University of Pittsburgh
Jennifer R. Gleason, Ohio State University
We argue that the question of what one should believe one ought to do is not as simple as it might at first appear. Specifically, we argue that, when one faces a transformative decision (which we will characterize), different norms of rational belief formation could under some circumstances lead to different pronouncements about what one ought to do. Even if one of the choices is better than the other from a third-personal perspective, a quest for forming accurate beliefs might not favor believing that one ought to make the better choice. By contrast, a respect for evidence should make one favor the better option over the less optimal choice. We take this to provide some reason to favor respect for evidence over the pursuit of accuracy.
The Necessities "In Here": Detection and Projection in Hume's Account of Causal Necessity
Aaron Wilson, South Texas College
Hume’s projectivist account of causal necessity or connection is understood to be motivated by the rejection of the “detection account” of causal necessity (e.g., Kail 2007). On the detection account, we acquire the idea of causal necessity by becoming acquainted with some genuine instance of it. However, the projection of necessity onto external objects is compatible with the detection of a genuine connection in the mind. Though Hume is widely read as also rejecting the detection of genuine connections in the mind, I argue that he only denies that we can detect genuine mental connections in acts of will or volition. Though Hume argues that perceiving a connection would give us a priori knowledge of the effect, this argument does not apply to the connection embodied by the determination of the mind to pass from one object to its usual attendant—which is what we project onto external objects.
Deontic Puzzles and Semantics for Ought-Statements
Yuna Won, Cornell University
Ordering semantics is the orthodox semantics for modals and conditionals today. In this paper, I discuss two well-known deontic puzzles and argue that the ordering semantic analysis of ought-statements expressing our duties and obligations is not suitable to our normative discourse and reasoning. Proposed ordering semantic solutions to the two deontic puzzles, Sartre’s Dilemma and Chisholm’s Paradox, have unpalatable results and invite even more puzzles. I diagnose that the ordering semantic solutions are unsatisfactory because the ordering semantic account fails to recognize that we do express two different types of normative judgments with ought-statements in our normative discourse. I distinguish them by introducing axiological and deontological uses of ought-statements into a semantic framework. The ordering semantic analysis does not leave room for deontological ‘ought’s, and I call this the Axiological Reduction.
Nathan Wood, University of Georgia
As a useful category of ethical theories, deontology has recently fallen out of favor with a majority of moral philosophers. The various interpretations on offer have failed to adequately capture a claim or idea that all deontological theories share, in the way that consequentialism as a moral category has a clear criterion that all consequentialist theories share. The waning influence of deontology has emboldened a new kind of consequentialist thesis claiming that all moral theories can be “consequentialized,” meaning that any normative claim made by a theory can be translated into a consequentialist claim, supposedly without anything of theoretical or moral value being lost in the process of translation. Any rejection of the “consequentializing strategy” implies the existence of a type of theory whose normative prescriptions cannot be translated into consequentialist terms. The best candidate is deontology, and this essay develops an interpretation of deontology that challenges the “consequentializing strategy.”
The Duty of Veracity and Possible Universal Consent
Ava Wright, University of Georgia
In a late essay, “On a Supposed Right to Lie Out of Philanthropic Concerns” (SR), Immanuel Kant asserts a formal juridical duty of truthfulness in “unavoidable” social testimony. The nature and scope of the juridical duty of truthfulness in SR, however, are not clear, and Kant's rationale for the duty is not well-developed in this short essay. My aim in this paper, therefore, is to determine whether Kant's theory of justice can support a juridical duty of veracity, and if it can, then to determine what sort of duty veracity is. I argue that an enforceable duty of veracity is a necessary condition for the possibility of universal consent to law, which is Kant's standard for law's legitimacy. I argue that this rationale restricts the scope of the duty of veracity to testimony concerning socially warranted, expert knowledge.
Parity, Incomparability, and Categorical Judgments
Leo Yan, Brown University
In “Parity, Comparability, and Choice”, Chrisoula Andreou presents a novel analysis of parity as a fourth comparative relationship distinct from betterness, worseness, and equality. Her analysis is meant to help Ruth Chang’s account of seeming incomparability in meeting a three-fold challenge. This is to first explain what the relationship of parity is supposed to be, then show how Chang’s account is distinct from other accounts of seeming incomparability, and finally demonstrate how Chang’s account has an advantage over these other accounts. I here argue that while Andreou’s analysis meets the first part of the challenge, it fails to meet the second and third parts. While Andreou gives a plausible analysis of what parity could be, it ultimately fails to help distinguish Chang’s account from its main rival account in any way that would afford it an advantage. Given this, Andreou’s analysis of parity is not one that Chang should adopt.
Animalism and Remnant-Persons
Eric Yang, Santa Clara University
The remnant-person problem purports to show that animalism—which is the view that we are (identical to) animals—yields absurd results. In this paper, I argue that the remnant-person problem rests on some problematic assumptions that are neither clear nor well-motivated. Moreover, I show that there are some considerations independent of animalism that imply the falsity of these claims, and hence such claims should not be regarded as obvious or self-evident. The upshot of this response is that it is available to every version of animalism.
Vida Yao, Rice University
I present a conception of the attitude of grace, and argue for its desirability within certain interpersonal contexts. These contexts are those in which a person needs a kind of love that is fully attentive and that also attaches to more than the good qualities of their character. I propose understanding grace as a love for the qualities of human nature, showing why this makes it different from love that attaches to a person because of his bare humanity or personhood. Grace, I will argue, may be uniquely suited to respond to the difficulties posed in the kind of context I will describe, as other forms of love that have been discussed in the philosophical literature (including love of bare personhood itself) may only further alienate and isolate the beloved, rather than strengthen one’s bond with him.
Contingent Labor in the Academy: Precarity, Power, and Ideology
Robin Zheng, Yale-NUS College
I identify two sets of (gendered and racialized) myths and attitudes common amongst academics that contribute to precarity. The first is the myth of meritocracy, according to which outcomes on the academic job market are determined by merit. The second is a “do what you love” ideology according to which academic work is intrinsically desirable and hence different from other work. I connect the problem of precarity to issues of bias and discrimination which have hitherto received much greater attention within the profession, and I argue that academics should take responsibility for the problem of precarious labor in academia.
Essence and Grounding Connections
Justin Zylstra, University of Alberta
I extend the truthmaker semantics to essence and provide a semantic argument for the view that no grounding connection obtains between an essentialist truth and its prejacent.