2018 Pacific Division Abstracts

APA Pacific 2018, San Diego, California

Postmodern Jazz and the Ontology of Improvisation: Should Jon Benjamin Really Have Learned to Play the Piano?

Jessica Adkins, Saint Louis University

Jon Benjamin is well known for his comedy and voice acting (Bob’s Burgers, Archer). In 2015, Benjamin took on a new venture and released his first musical album—a jazz album titled Well I Should Have...learned how to play the piano. Benjamin, who has no clue how to play piano, even refers to himself as a “jazz daredevil.” My task is to evaluate this album and determine whether it should be regarded as improvisational and whether it is truly a work of jazz, since its creator knows little about jazz music and lacks the technical skills required to play the piano. I look to literature on jazz ontology, postmodernism, and experimental music to ultimately defend the album as being both improvisation and jazz. Features of the album I highlight include its extreme risk-taking and its pushing of the boundaries of the jazz discipline.

Forgiving and Forgetting

Craig Agule, Rutgers University–Camden

To fully understand forgiveness, we must grapple with the relationship between forgiving and forgetting. Because blame and forgiveness are both centrally matters of attention, it should not be surprising to discover that forgiveness will dispose us to forget. Seeing this is central to understanding forgiveness, and it also has consequences for whether, when, and how we should forgive.

Agency, Dissimulation, and Social Perception

Rami Ali, Lebanese American University

Recently, Parrott (2017) has argued that social perception, the perception of others’ mental states, is not possible in the same way the perception of objects, events, and their properties is. While perception gives rise to perceptual knowledge, social perception does not. This is because of “an ontological gap between a person’s mental state and the overall look the person displays, which arises because of the person’s agency” (p. 16). While this view is tempting, I will argue that there is a simpler proposal that avoids the gap in Parrott’s picture. By focusing on cases of dissimulation, which involve concealing or feigning a perceptual appearance, and noting the similarity between dissimulation and coloration in nature (e.g., visual camouflage), we can see how others’ agency does not compromise our perception of others’ states. Moreover, the resultant picture seems to better capture what we think we can know by watching others.

Temporalized Constitutivism and Free Agency

Roman Altshuler, Kutztown University of Pennsylvania

Perhaps the deepest problem for free will rests on arguments designed to show that free agency is impossible given the role of our motivational past in necessitating our actions. I argue that we can respond to this challenge by adopting constitutivism: the view that agents, qua agents, are bound by constitutive aims of agency. If constitutivism is true, then all agents, qua agents, have access to the same set of norms. Agents are thus free and responsible relative to universal agential norms insofar as those norms are necessarily constitutive of their agency. Furthermore, I argue that this ability to conform to norms with regard to one’s future actions gives agents some control over the motives in light of which they choose their actions, thus ensuring that they not only have access to correct norms, but also the ability to be motivated by them.

Towards a Three-Factor Approach to Monothematic Delusion

Thomas Ames, University of Missouri–St. Louis

The two-factor approach proposed by Coltheart and colleagues is a novel attempt to understand the causes of delusion. The approach involves asking two questions: (1) “What brought the delusional idea to mind in the first place?” and (2) “Why is this idea accepted as true and adopted as a belief when it is typically bizarre, and when so much evidence against its plausibility is available to the patient?” (Coltheart et al. 2011b). In what follows, I’ll agree that these are the right questions to ask, but disagree that the answer to the second question is as simple as Coltheart and his colleagues have supposed. I argue that the adoption of delusional thoughts as beliefs involves two separate processes: not just the evaluation of beliefs, but also the patient’s subsequent metacognitive judgment of that evaluation. This three-factor approach leads to a clearer understanding of the genesis and maintenance of delusional thoughts.

Linguistic Hijacking

Derek Anderson, Boston University

This paper introduces the notion of linguistic hijacking, a form of epistemic violence in which members of dominant groups misuse politically significant terminology in ways that harm marginalized groups. I focus on hijackings of the terms ‘racist’ and ‘racism.’ I argue linguistic hijacking is best understood as a form of epistemic oppression of the kind Dotson (2014) calls a third-order epistemic exclusion. When a dominant agent hijacks the word ‘racist’ by claiming, ‘affirmative action is racist,’ they are problematically influencing the epistemological system that determines which uses of ‘racist’ count as intuitive or commonsensical. This contrasts with a different explanation of the harm of linguistic hijacking according to which hijackings corrupt the semantic functions of politically significant terms. I argue this alternative explanation fails to accurately capture the harms of linguistic hijacking. One upshot is that misuses of the term ‘racism’ still refer to racism and cannot change its meaning.

Non-Locality in Intrinsic Topologically Ordered Systems

Jonathan Bain, New York University

Intrinsic topologically ordered (ITO) condensed matter systems are claimed to exhibit two types of non-locality. The first is associated with topological properties and the second with a particular type of quantum entanglement. These characteristics are supposed to allow ITO systems to encode information in the form of quantum entangled states in a topologically non-local way that protects it against local errors. This essay first clarifies the sense in which these two notions of non-locality are distinct, and then considers the extent to which they are exhibited by ITO systems. I will suggest that while the claim that ITO systems exhibit topological non-locality is unproblematic, the claim that they also exhibit quantum entanglement non-locality is less clear, and this is due to ambiguities in the notion of quantum entanglement, and to an unspoken assumption that explanations of topological phenomena must be parsed in terms of mechanisms.

Non-Causalism as Metaphysical Dependence

Jordan Baker, University of Tennessee

Non-causal theories of action are not widely held among contemporary philosophers. This is, I argue, because contemporary non-causal accounts have a weakness that undermines their plausibility. These accounts rely on “experiential premises” in their arguments, which leaves them vulnerable to critiques concerning an explanatory gap between our experiences and the causally ordered natural world. A more plausible non-causal theory of action can be constructed, I contend, by relying instead on essential grounding relations to explain how actions can (i) be uncaused, (ii) belong to the agent, and (iii) fit with the causally ordered natural world. I first note how three contemporary non-causalists are vulnerable to the above critique. Second, I argue that non-causal accounts that rely on essential grounding both avoid those critiques and suggest an independently plausible theory of action. Third, I conclude by briefly canvassing questions a complete essential grounding theory should address.

Belief Dependence: How Do the Numbers Count?

Zach Barnett, Brown University

In some sense, it is clear that the numbers count. That is, it is clear that the number of thinkers on a given side of a disputed issue is typically relevant to the degree of support their opinions provide. It is natural to think that numbers cannot be all that matter, though, for the extent to which the opinions are independent also seems to have substantial epistemic import. Jennifer Lackey (2013) calls this natural idea into question, suggesting that there is no good way to capture the type of independence that can play this epistemic role. This paper investigates the issue, responding to the concerns raised by Lackey. An expectational account of belief dependence and independence is developed—one that can be applied whether we think in terms of credences or in terms of all-or-nothing beliefs.

An Epistemic Assessment of the Modal Ontological Argument

Brian Barnett, University of Rochester

Given an Anselmian conception of God, plausible principles of modal logic allow the valid derivation of theism from a seemingly weak claim that even many atheists find intuitive: that it is at least possible that God exists. Critics of this reasoning have noticed that a parallel atheistic argument can be constructed. Many theists, atheists, and agnostics alike have come to think that neither argument is more rationally acceptable than the other, yielding an epistemic stalemate over God’s existence. My aim in this paper is to resist this assessment. Specifically, I argue that modal intuitions favor the atheistic line of reasoning.

Is Visual Masking Evidence for Unconscious Perception?

Jacob Berger, Idaho State University, and Myrto Mylopoulos, The Graduate Center, CUNY

It is widely assumed in cognitive science that visual masking provides excellent evidence for unconscious perception. In a typical study, stimuli that would otherwise be consciously seen are presented and masked—that is, either preceded or followed by different stimuli—in ways that ostensibly render the targets invisible to visual consciousness. Recently, however, some have expressed skepticism about such evidence. We take the most powerful critique of this kind to be a series of experiments by Megan Peters and Hakwan Lau, which purportedly shows that many cases of visual masking may involve stimuli that are weakly, though consciously, perceived. This work, moreover, dovetails with a theoretical challenge to masking research pursued by Ian Phillips, according to which much masking may not investigate genuine perception at all. We argue here that these related concerns about visual masking are unfounded and that the technique arguably does constitute strong evidence for unconscious perception.

Aristotelian Eudaimonism Is Not Motivationally Egoistic or Self-Effacing

Adam Blincoe, Longwood University

Eudaimonists are commonly thought to identify ethical behavior as that which advances (or characteristically advances) the flourishing of the agent herself. Critics claim that this sort of justification leads to an egoistic motivation for the virtuous agent. If eudaimonists respond by claiming other, apparently more virtuous motives for the virtuous agent, a charge of self-effacingness is leveled. These charges have been advanced recently by Thomas Hurka. In this essay I argue, contra Hurka, that Aristotelian eudaimonism can avoid both egoistic motivation and self-effacingness. To do this I introduce two novel contributions to eudaimonism: the mixed view of virtue justification (which justifies virtues via the ethical demands of the world and the flourishing of the agent) and the enmeshment account of human flourishing (which contends that the flourishing of an agent does not merely coincide with the flourishing of others, but is partially constituted by it).

Salmon, Schiffer, and Frege's Constraint

Paolo Bonardi, Metropolitan University and Université de Genève

Since 1987 a debate has been going on between Salmon and Schiffer regarding a puzzle, devised by Schiffer, about Salmon’s Russellian account of belief reports. My goal in the present work is twofold: to show that Salmon’s argument for rebutting Schiffer’s puzzle is not entirely convincing; and to raise a new puzzle, which achieves the same results as Schiffer’s without incurring Salmon’s objection.

Strengthening Principles and Counterfactual Semantics

David Boylan and Ginger Schultheis, Massachusetts Institute of Technology

The Strict Conditional account of counterfactuals is committed to a family of strengthening principles. We focus on Antecedent Strengthening with a Possibility, and present a counterexample to it. This kind of example tilts the empirical balance in favor of a variably strict conditional account which uses an ordering that is not almost-connected. We show how such an account is naturally generated by a Kratzerian variably strict semantics.
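A schematic rendering of the principles at issue (our notation, not necessarily the authors’ exact formulation), writing > for the counterfactual conditional and ◇ for possibility: plain Antecedent Strengthening licenses the first inference below, while a possibility-qualified variant of the kind the paper targets adds a possibility premise.

\[
\text{(AS)}\ \ \varphi > \chi \ \models\ (\varphi \wedge \psi) > \chi
\qquad\qquad
\text{(AS}\Diamond\text{)}\ \ \varphi > \chi,\ \Diamond(\varphi \wedge \psi) \ \models\ (\varphi \wedge \psi) > \chi
\]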

Pain, Multimodality, and Bodily Awareness

Adam Bradley, University of California, Berkeley

In this essay I clarify the role that the experience of bodily pain plays in our multimodal awareness of our own bodies ‘from the inside,’ or our bodily awareness. I first argue that the notion of bodily awareness is essential for characterizing what is distinctive about our experience of our bodies and that it is a constitutively multimodal phenomenon. I then argue that the experience of bodily pain is a full-fledged component of our bodily awareness. To this end I distinguish between the Derivative Model, on which pains and other bodily sensations are a kind of experiential add-on to our bodily awareness, and the Constitutive Model, on which the experience of pain is eo ipso a way of being conscious of one’s own body. To settle the question, I cite two lines of clinical evidence which favor the Constitutive Model over the Derivative Model.

Rebuilding after Disaster: Inequality and the Political Importance of Place

Elizabeth Brake, Arizona State University

State disaster policy can be thought of as an insurance policy. The challenge is to explain why it should be mandatory, and why disaster recovery should extend to assistance in rebuilding even when relocation would be less costly. Requiring taxpayers to pay the bill for response, let alone rebuilding in a disaster-prone area, could be construed as a mandatory insurance scheme. Why should disaster insurance be mandatory and public? First, I note the pervasive link between socioeconomic inequality and disaster vulnerability. I then argue that disaster policy should be partly guided by an egalitarian principle of distributive justice, such as equal opportunity. This justifies the state’s assistance in disaster recovery, at least for worse-off citizens, but also implies that better-off citizens have less entitlement to post-disaster aid. Finally, I argue that loss of place community is a cost which should be weighed in disaster policy aimed at securing equal opportunity.

Reflective Equilibrium, Judgments of Coherence, and Judgments of Beauty

Devon Brickhouse-Bryson, Lynchburg College

Reflective equilibrium requires evaluating candidate theories by their coherence. I argue that the judgments of coherence required by the method are analogous to judgments of beauty and that they should be understood as a species of judgments of beauty. I make this argument by reference to Kant’s “antinomy of taste.” The antinomy of taste states that the distinctive feature of judgments of beauty is that they are unprincipled and yet nevertheless possible. I argue that judgments of coherence share this distinctive feature of judgments of beauty. This argument requires analysis of the nature of principles and seeing why it is plausible that there are no principles of coherence or beauty. The conclusion that judgments of coherence are a species of judgments of beauty entails that reflective equilibrium is a partially aesthetic method of theory evaluation. This means that part of using the method is evaluating candidate theories by their beauty.

Disability Studies, Conceptual Ethics, and Metalinguistic Negotiation

Elizabeth Cantalamessa, University of Miami

Disability studies is an interdisciplinary field aimed at understanding the complex nature of disability in order to improve the lives of people with disabilities. Conceptual ethics involves normative or evaluative conceptual analysis and critique. Many disagreements over conceptual adequacy actually take place implicitly. Metalinguistic negotiation occurs when speakers engaged in a conceptual dispute make competing claims using particular concepts in order to argue for how the relevant terms or concepts ought to be used. First I discuss the emerging field of “conceptual ethics.” I then discuss the role of “metalinguistic negotiation” in conceptual ethics before describing an example of such disputes from mainstream analytic philosophy. In the third section I show how the debates in disability studies over how one should understand the social model of disability fit the metalinguistic negotiation framework. I conclude by suggesting that disability studies is continuous with mainstream analyses in analytic philosophy.

Are Political Minorities in Academia Victims of Epistemic Injustice?

Spencer Case, University of Colorado Boulder, and Hrishikesh Joshi, University of Michigan

Miranda Fricker argues that justice has an epistemic dimension. Testimonial injustice—one of the two kinds of epistemic injustice Fricker identifies—occurs when a speaker’s credibility is deflated, owing to motivated irrationality on the part of the hearer. Fricker and others interested in epistemic injustice worry primarily about women and racial and sexual minorities being victims of it. Here we argue that political minorities in academia are also plausibly victims of epistemic injustice. We cite recent empirical research that reveals widespread willingness of liberal academics to discriminate against political conservatives in hiring, publication, and symposium invitation decisions. Other evidence reveals that this bias is emotionally driven and resistant to counter-evidence in the way that Fricker requires for it to count as a prejudice. If so, then there is good reason to believe epistemic injustice is occurring, and that it represents a threat to the university as an institution.

Communicating Essentially Indexical Thoughts

Bryan Chambliss, University of Arizona

Understanding the intentionality of thought requires a theory of content. Good theories of content meet (at least) two desiderata: they assign rich enough contents for thoughts to explain behavior, and lean enough contents for thoughts to be expressed linguistically. In the case of essentially indexical thought, these two desiderata appear to be at odds, creating the Puzzle of Communicating Essentially Indexical Thought. Solutions to the puzzle clarify the role of centered contents, which capture the thinker’s “perspective,” but raise problems for communication. After rejecting solutions that jettison centered contents altogether, I argue that solving the puzzle requires rejecting the Classical Model of communication which drives it, and denying that an adequate theory of content must yield a single content both expressed by the speaker and understood by the addressee. The resulting re-centering account of communication is motivated by considering the communication of second-person essentially indexical thought, or de te thought.

Scepticism, Custom, and Hume: Philosophy’s Place in Common Life

Bowen Chan, University of Toronto

In “Conclusion of this Book,” Hume takes a curious turn to a “total skepticism” (T 1.4.7.7) that leads him into “the deepest darkness” (T 1.4.7.8), without any apparent path of escape. He, however, escapes it, at least temporarily, by returning to “the common affairs of life” (T 1.4.7.9)—putting philosophy aside. Thus it seems that to avoid scepticism is to avoid philosophy. Hume, of course, returns to philosophy, continuing to write Book Two, but he does not despair at all. Donald Ainslie has recently argued that scepticism, for Hume, poses a problem only for philosophers, because philosophy is like a game and “is entirely optional” (2015, 239). I argue, however, that a true philosophy, for Hume, is not a mere game, but plays a useful role in a person’s life, and is recommended for everyone; and conversely, a total scepticism is useless and disagreeable, and is therefore condemned.

The Intrinsic Structure of the Quantum World: Progress in Field's Nominalistic Program

Eddy Keming Chen, Rutgers University

In this paper, I introduce an intrinsic account of the quantum state. This account contains three desirable features that the standard platonistic account lacks: (1) it refers to no abstract mathematical objects such as complex numbers, (2) it is independent of the usual arbitrary conventions in the wave function representation, and (3) it explains why the quantum state has its gauge degrees of freedom. Consequently, this account extends Field's program outlined in Science Without Numbers (1980), responds to Malament's prominent impossibility conjecture (1982), and makes progress towards a genuinely intrinsic and nominalistic account of quantum mechanics. I also discuss how it might bear on the debate about “wave function realism.” Along the way, I axiomatize the quantum phase structure and prove its representation and uniqueness theorems. These formal results could prove fruitful for further investigation into the metaphysics of phase and theoretical structure.

Tough Choices, Reasons, and Practical Reasoning

Ting Cho Lau, University of Notre Dame

The Weighing Reasons Account (WRA) of practical reasoning tells us that when faced with a tough choice, we should consider the reasons that favor each option and choose the option that the balance of reasons favors. I argue that tough choices show that the WRA is incomplete. Section one gives the general structure of tough choices and proposes two desiderata for any complete theory of practical reasoning. Section two introduces the WRA. Section three considers three developments of the WRA and argues that none of them satisfy the desiderata stated in section two. Section four introduces my positive proposal for how to understand and make tough choices. I propose that tough choices are hard because they force us to discover and even create who we are (i.e., our practical identities). The section responds to the objection that my view precludes the possibility of error in tough choices.

Boolos’ Hardest Logic Puzzle Ever in its Purest Form

Juan J. Colomina, University of Texas at Austin, and Pablo Stinga, Iowa State University

This paper takes seriously Boolos’ intentions when presenting his “Hardest Logic Puzzle Ever.” We stress the fact that Boolos instructs us to solve the puzzle with three yes-no questions. In addition, it is a requirement that all the gods, including Random, are always obliged to answer. Moreover, it is implied in Boolos’ solution that the meaning of Da and Ja is irrelevant to solving the puzzle (so semantics does not determine our ontology), and, at the same time, that all of this holds in virtue of the irreducible fundamentality of “the law of excluded middle.” The purpose of this paper is twofold. First, we prove that Boolos’ original puzzle cannot be solved, in an absolutely deterministic way, in fewer than three yes-no questions. Second, we show that one can nevertheless identify the gods in fewer than three questions with some probability of success, which we also compute here.
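For orientation, a standard counting observation behind the three-question lower bound (a sketch of the familiar argument, not necessarily the proof given in the paper): there are 3! = 6 ways to assign the identities True, False, and Random to the three gods, while an adaptive strategy of two yes-no questions can distinguish at most 2^2 = 4 answer sequences, so no two-question strategy can deterministically separate all six assignments:

\[
2^{2} = 4 \;<\; 6 = 3!
\]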

Doubting the Deeds Account of Justification

Sherri Conklin, University of California, Santa Barbara

In this paper, I provide my reader with reason to doubt the theoretical usefulness of the deeds or objective theory of moral justification, which is the most widely held stance on moral justification in ethics. Specifically, I argue that the deeds account of justification [DAJ] is inadequate for our general moral theorizing because we cannot use it to make sense of what justifies morally permissible actions generally. The DAJ is tailored for handling moral justification in cases where an agent performs a pro tanto wrong action in the course of performing a pro tanto right action but is poorly suited to helping moral philosophers explain why morally permissible actions are justified generally. While philosophers are interested in the latter and not the former, both are important to building a unified account of moral justification [UMJ]. I assume that UMJs are, ceteris paribus, more valuable to moral theorists than piecemeal accounts.

On Kagan's Conjecture

Michael Da Silva, University of British Columbia

Shelly Kagan identifies two components of desert, the comparative and the noncomparative, and conjectures that “the demands of comparative desert . . . are perfectly satisfied when noncomparative desert is perfectly satisfied.” While Kagan claims that his conjecture is unimportant for his larger claims, it is used to support at least three of his key claims about the nature of desert. This work demonstrates the importance of Kagan’s conjecture to those claims and Kagan’s larger picture. It then presents three challenges to the conjecture. These challenges suggest that, at best, the conjecture cannot support Kagan’s model of desert. In short, I argue that Kagan’s conjecture (1) threatens to undermine his dualist picture of desert by making the noncomparative reducible to the comparative, (2) cannot be operationalized in the real world or properly modelled in ideal settings, and (3) is inconsistent with plausible views on desert.

Ambiguity, Rule Interpretation, and the Spirit of the Game

Cory Davia, University of California, San Diego

Sometimes, a sport's rules don't settle questions about how that sport should be played. In particular, some things clearly violate the spirit of the game, even if they are within the rules. This idea—that the rules on the books are not all there is to the sports they govern—is an influential one in the philosophy of sports. Many have thought that we need it in order to understand how officials ought to behave in cases where a sport's rules are ambiguous or incomplete. In this paper, I argue that appeals to the spirit of the game do not answer the rule interpretation questions philosophers have asked them to. Nonetheless, I go on to argue that we have independent reasons for recognizing these extra-rule norms, and tentatively propose an account of how to understand them.

Urban Animals as Captives

Nicolas Delon, University of Chicago

Urban animals can benefit from the milder temperatures, tall buildings, underground dens, abundant food supplies, and protection from hunting, predation, and inclement weather afforded by cities and urban areas. But for the same reason they are especially vulnerable, as they increasingly depend on those resources. Urban animals constitute a borderline category of “liminal” animals, neither wild nor domesticated. This paper sets a further challenge by offering an account of urban animal captivity. Drawing on Lori Gruen’s account of captivity, I argue that insofar as they are confined, controlled and dependent in the relevant sense, urban animals are oftentimes captive; emphasize the various types and contexts of urban captivity; and outline the potential ethical implications of seeing urban animals as captives.

Locke on Substance, Substances, and Superadded Powers

Steven Dezort, Texas A&M University

In light of Locke’s declaration in the Essay that “GOD can, if he pleases, superadd to Matter a Faculty of Thinking” (4.3.6; 541), commentators have understood his doctrine of superaddition as a matter of how a special set of powers, such as life, thought, gravity, and perpetual motion, relates to matter. Beginning with Wilson (1979), this understanding has caused commentators—most recently Wood (2016), Bolton (2016), Kim (2016), Jolley (2015), Janiak (2015), Connolly (2015), and Downing (2014)—to approach superaddition as a variant of the Cartesian mind-body issue. This approach mistakes superaddition as a relationship between superadded powers and matter, when superaddition is really a relationship between all powers and substance. I argue that Locke is a substance monist, and that all powers are superadded to substance. It is through superaddition that substance becomes “substances,” whether matter, a peach tree, or an elephant—all mentioned by Locke in his second letter to Stillingfleet.

The Role of Intentional Information Concepts in Animal Behavior Research

Kelle Dhein, Arizona State University

Philosophers working on the problem of intentionality in non-linguistic contexts often invoke the concept of biological fitness as the objective grounds for attributing intentionality to living systems. However, such biological theories of intentionality don’t square with the way experimental biologists searching for causal explanations of animal behavior use intentional concepts to guide their laboratory research. I argue that animal behavior researchers hang the concept of intentionality on goal-directed function, not the deep history of natural selection. To support that claim, I analyze the role that intentional terms play in the design and interpretation of experiments on the navigational abilities of ants. In such contexts, the use of intentional terms implies that the causal interactions constituting a living system in a given environment are subject to a higher level of functional organization. That implied organization aids the knowledge-gathering activities of researchers by constraining which causal interactions are of interest.

When Is It Unreasonable to Be Unsurprised?

J. L. A. Donohue, University of California, Los Angeles

The so-called “Merchant’s Thumb” principle is intended to help us to determine when it is reasonable to attribute a very improbable event to chance and when it is not. But it misses its mark because it cannot accommodate cases that it is designed to cover. In this paper, I suggest a modified principle that can accommodate such cases. Though we remain far from an explanation of why the universe is the way it is, we are closer to understanding why we seek an explanation for the fact that it permits life despite incredible odds. Those who would dismiss the fine-tuning argument too quickly attribute to chance events that we are justified in seeking an explanation for. Our surprise is reasonable, and the principle helps to formalize the mistake of those who are not surprised. In other words, it helps us to capture when it is unreasonable to be unsurprised.

Concordance as a Cognitive Value in Multiple-Model Scientific Practices

Austin Due, San Francisco State University

Two distinct areas of discussion in recent philosophy of science literature have focused on robustness analysis and the role of cognitive values. Robustness analysis is viewed by many as a possible way to ground realism, or at least as a way to point out reliability. In particular, inferential robustness analysis can be used in climate science to yield predictions, and insofar as it succeeds, it might also yield confirmations of posited causal factors. Regarding cognitive values, these necessarily permeate science because of underdetermination. My aim here is not merely to point out that modeling in climate science is subject to these cognitive values, but to identify one in particular that is an ideal desideratum of the evidential support—in these cases, the models themselves—of theories: concordance. Concordance is a value insofar as it is a necessary antecedent condition instantiated by models in successful cases of robustness analysis.

Moral Education and Automaticity

Asia Ferrin, American University

The idea that good moral decisions need not always be guided by deliberation is gaining acceptance in psychology and philosophy. This presents an interesting point of reflection for theories of moral development and education. According to rationalistic models, conscious deliberation ought to be a central tool for moral education or development. I argue here, however, that given the intelligence of automaticity, we ought not focus so heavily on deliberative skills. Instead, moral education programs and pedagogy need to take more seriously the role of automatic cognitive processes in moral development. In Section I, I explain the attraction of rationalism in moral education. In Section II, I describe empirical research that challenges the emphasis on reflection and deliberation in moral education. In Section III, I offer some concrete examples of pedagogical approaches that capitalize on and further foster the intelligence of automaticity.

Quasi-Nihilism: An Epistemic, Non-epistemicist Response to the Sorites Paradox

Corin Fox, University of Virginia

Here I outline an epistemic, non-epistemicist response to the sorites paradox, which I call quasi-nihilism. Quasi-nihilism, like other responses, preserves classical logic together with its standard semantics and meta-theory. I argue that quasi-nihilism has significant benefits over other standard responses which similarly cohere with classical logic. If successful, this argument eliminates the need for proponents of classical logic to accept some controversial claims of more standard views. In particular, the view avoids some baggage with standard epistemicism and nihilism: widespread sharp cutoffs and self-undermining objections (respectively). In the first section, I formulate the sorites paradox. The second details quasi-nihilism, and contrasts it with two more standard responses. I argue that quasi-nihilism has significant benefits over epistemicism and nihilism in the third. The fourth considers objections, and details some dialectical advantages of quasi-nihilism.
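One standard schematic formulation of the paradox, offered here only for orientation (the paper’s own formulation may differ), with F(n) read as “a collection of n grains of sand is not a heap”:

\[
F(1), \qquad \forall n\,\bigl(F(n) \rightarrow F(n+1)\bigr), \qquad \therefore\ F(10^{7}).
\]

Each premise is compelling, yet classical logic carries us from them to the conclusion that ten million grains do not make a heap.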

Binding the Self: The Ethics of Ulysses Contracts

Andrew Franklin-Hall, University of Toronto

When we fear that our intentions will be defeated by our own future actions, we might seek to elicit another to bind us to them. Such “Ulysses contracts” are puzzling. Their general assumption is that, but for A’s prior authorization at time t1, it would be wrong for B to interfere with A contrary to A’s express will at t2. Therefore, A’s will is crucial to the justification of B’s action. But if A’s will carries such moral importance, why should B heed A’s earlier act of will and ignore his later one? I defend Ulysses contracts with a kind of “true self” account I call “Sovereign Authenticity”: A can authorize B at t1 to bind him at t2 only when A’s will at t1 is a more authentic expression of the settled aims we can attribute to him at t2 than is his actual will at t2.

What Pessimism about Moral Deference Means for Disagreement

James Fritz, Ohio State University

Many writers have recently argued that there is something distinctively problematic about deferential belief in the moral domain. Call this claim pessimism about moral deference. Pessimism about moral deference, if true, seems to provide an attractive way to argue for a bold conclusion about moral disagreement: one should not suspend judgment about moral propositions in response to mere disagreement. Call this claim steadfastness about moral disagreement. Perhaps the most prominent recent discussion of the connection between moral deference and moral disagreement, due to Alison Hills, argues from pessimism about the former to steadfastness about the latter. This paper reveals that Hills’s line of thinking is unsuccessful. It closes by drawing a more general lesson for an approach to deference that unites many pessimists: a focus on the importance of moral understanding is better-suited to support conclusions about moral deference than to support steadfastness about moral disagreement.

Counterfactuals, Centering, and the Gibbard-Harper Collapse Lemma

Melissa Fusco, Columbia University

Egan (2007) argued that while Causal Decision Theorists “hold fixed [their] initial views about the likely causal structure of the world,” there are cases where agents should not hold such views fixed in deliberation. However, a simple lemma from Gibbard and Harper 1978 shows that if an agent is probabilistically coherent, and the semantics for counterfactuals obeys Centering, Cr(a → s | a) reduces to Cr(s | a). It follows that Egan’s view of such cases collapses into classical EDT; by Gibbard and Harper’s result, the work of putting causal information into a simple EDT system is exactly undone by adding conditionalization. I explore the altered landscape for both views, considering, along the way, some new cases in which agents’ acts seem to give them knowledge of causal relations.
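A compressed reconstruction of the collapse calculation the abstract cites (our rendering of the lemma, assuming strong Centering, on which a ∧ (a → s) and a ∧ s are equivalent):

\[
\mathrm{Cr}(a \rightarrow s \mid a) \;=\; \frac{\mathrm{Cr}\bigl((a \rightarrow s) \wedge a\bigr)}{\mathrm{Cr}(a)} \;=\; \frac{\mathrm{Cr}(s \wedge a)}{\mathrm{Cr}(a)} \;=\; \mathrm{Cr}(s \mid a).
\]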

Leibniz on Contingency and Individual Concepts

Juan Garcia, Ohio State University

Despite insisting on contingency, Leibniz endorsed theses which seem to entail that all properties are had essentially—i.e., superessentialism. One such thesis is Leibniz’s doctrine that every substance has an individual concept that includes predicates denoting everything that will ever happen to it. After developing an argument for superessentialism from this doctrine, I argue that Leibniz has the theoretical tools to block this argument and retain an intelligible sense of contingency. I develop Leibniz’s idea that contingent truths can be included in individual concepts. This can be done, I insist, by including predicates denoting what an agent will do and predicates denoting unexercised powers to do otherwise. To avoid superessentialism, I further suggest that the content of individual concepts counterfactually depends upon how agents exercise their powers. I also develop a model in which this kind of counterfactual dependence respects the thesis that concepts are individuated by their content.

Quantifiers, Demonstratives, and Compositionality

Geoff Georgi, West Virginia University

Some philosophers have argued that in virtue of the semantics of quantification, a semantic assignment of propositions to sentences of English relative to contexts cannot be compositional. In this paper, I argue that this is a mistake. Compositionality principles governing the semantics of natural language must be consistent with the semantics for demonstratives, but a semantics for quantification that assigns propositions to sentences relative to contexts satisfies any compositionality principle consistent with a philosophically sound semantics for demonstratives.

Valid Consent and the Altruistic Misconception

Heather Gert, University of North Carolina at Greensboro

The Altruistic Misconception is the misconception that phase 3 medical trials are approved only when it is probable that, if successful, the drug tested will provide future patients with benefits not provided by medications already on the market. IRBs have an obligation to require researchers to disabuse potential participants of this misconception, especially in instances when researchers recognize the drug being tested is unlikely to differ significantly from what is already available.

Early Modern Neurons: Swedenborg's Doctrine of the Cerebellula

Bryce Gessell, Duke University

Swedish scientist Emanuel Swedenborg developed a theory of brain function based on cerebellula. Cerebellula are “little tiny brains” whose activity governs sensation and cognition in the cortex. The functional properties of cerebellula are similar to those of neural cells in modern theories. For this reason, I argue that Swedenborg’s view—virtually unknown in his time and our own—is the clearest legitimate precursor to the neuron doctrine.

Modeling Chisholm's Logic of Obligation, Requirement, and Defeat

Marian Gilton, Independent Scholar

This paper investigates the foundations of Chisholm’s deontic logic as an alternative to standard deontic logic. Over the course of his career, Chisholm articulated a set of axioms for a deontic logic, and he gestured toward its connection with his work with Sosa on a logic of preference. Although Chisholm’s work discussed the motivations and applications of his axioms, he never provided a model for them. This paper’s chief aim is to construct a model for Chisholm’s deontic logic defined in terms of the Chisholm-Sosa preference logic. This is done by combining techniques from modal logic and social choice theory. Since Chisholm’s system is less familiar today than standard deontic logic, the paper begins with a thorough reconstruction of his axiomatization. Finally, having shown that Chisholm’s logic is a viable alternative to standard deontic logic, this paper shows that his system evades Chisholm’s paradox in a novel way.

Pluralist Structural Realism: The Best of Both Worlds?

David Glick, Ithaca College

John Worrall famously claimed that structural realism is the best of both worlds; it enables one to endorse the best arguments for scientific realism and antirealism. In this paper, I argue that structural realism also enables one to combine two other seemingly inconsistent positions: realism and pluralism. Indeed, the very features which form the basis of the structuralist reply to the problem of theory change may be applied synchronically to allow for a pluralist structural realism.

Realizing Race

Aaron Griffith, College of William and Mary

A prominent way of explaining how race is socially constructed appeals to social structures and social positions. On this account, the construction of one’s race is understood in terms of one’s occupying a certain social position in a social structure. The aim of this paper is to give a metaphysically perspicuous account of this form of race construction. Working on an analogy with functionalism about mental states, I develop an account of a “race structure” in which various races are functionally defined social positions. Individual persons occupy these social positions by “playing the role” characteristic of those positions. The properties by which a person “plays” a race role are the realizers for one’s race. I characterize the social construction of one’s race in terms of a realization relation that satisfies a “subset” condition on a person’s social powers. Races, on this view, are functionally defined, multiply realizable social kinds.

The Value of Fidelity in Adaptation

James Harold, Mount Holyoke College

The adaptation of literary works into films has been almost completely neglected as a philosophical topic. I discuss two questions about this phenomenon: (1) What do we mean when we say that a film is faithful to its source? (2) Is being faithful to its source a merit in a film adaptation? In response to (1), I set out two distinct senses of fidelity: story fidelity and thematic fidelity. (There are, of course, other senses of fidelity as well.) I then argue, in response to (2), that thematic fidelity, but not story fidelity, is an aesthetic merit in a film adaptation. The key steps in this argument involve showing that merely preserving the story from one medium to another does not typically involve an aesthetically significant accomplishment, whereas preserving a theme across different media does.

Distributed Group Knowledge and Epistemic Luck

Keith Harris, University of Missouri

We sometimes ascribe knowledge to groups even when no individual group member possesses the knowledge in question. Distributed accounts of group knowledge take these ascriptions seriously. One objection to such views is that they improperly divorce what a group knows from how that group can be expected to act. I raise an additional concern for distributed accounts of group knowledge. For any large group, it appears that distributed group beliefs, even if true, are likely to be true due to epistemic luck. This paper is in two parts. First, I lay out an argument to the effect that, on the face of it, the luckiness of true distributed beliefs prevents such beliefs from achieving the status of knowledge. I then argue that, although the argument from luck undermines some purported cases of distributed knowledge, some groups are capable of possessing knowledge because their beliefs are not merely luckily true.

Beyond Spatial Orientation: Merleau-Ponty’s Account of Perspective

Rebecca Harrison, University of California, Riverside

In the Phenomenology of Perception, Merleau-Ponty writes, “My point of view is for me much less a limitation on my experience than a way of inserting myself into the world in its entirety” (PoP 345). What does he mean by this? While philosophers have often thought of “perspective” as at least in part a certain limitation on a particular subject’s perceptual experience of the world, Merleau-Ponty thinks of perspective as the means by which we can have perceptual experience at all. Additionally, Merleau-Ponty’s view involves a much richer notion of perspective than mere spatial orientation, one that includes not just the subject’s orientation in space but also her orientation in historical time, in culture, in personal history and values. In this paper, I will explore what a Merleau-Pontyan “point of view” looks like, and what consequences this richer notion of a “point of view” has for his theory of perception.

Assertability Semantics for Conditionals, Quantification, and Modality

Peter Hawke and Shane Steinert-Threlkeld, Universiteit van Amsterdam

We exhibit a novel semantic framework in the state-based tradition, aimed at capturing subtle interactions in natural language between indicative conditionals, basic quantifiers and epistemic modals. We show that our framework accounts for linguistic data that a more orthodox framework cannot accommodate. In particular, it uniformly addresses challenging data recently emphasized by Gillies (on conditionals), Willer (on conditionals) and Yalcin (on epistemic modality de re). This yields two foundational morals. First, the framework is a static alternative to recent attempts to accommodate some of the data with a dynamic semantics. Second, our framework is naturally interpreted as governing assertability rather than truth per se, and seems a promising basis for an expressivist semantics.

Immoral Realism

Max Hayward, Bowling Green State University

Nonnaturalist realists are committed to the belief, voiced by Parfit, that if there are no nonnatural facts, then nothing matters. But it is morally objectionable to conditionalise all our moral commitments on the question of whether there are nonnatural facts. Nonnatural facts are causally inefficacious, and so make no difference to the world of our experience. And to be a realist about such facts is to hold that they are mind-independent. It is compatible with our experiences that there are no nonnatural facts, or that they are very different from what we think. As Nagel says, realism makes scepticism intelligible. So the nonnaturalist must hold that you might be wrong that your partner matters, even if you are correct about every natural fact about your history and relationship. To hold that conditional attitude to your partner would be a moral betrayal. So believing nonnaturalist realism involves doing something immoral.

Provocateurs and Their Rights to Self-Defence

Lisa Hecht, Stockholms Universitet

A provocateur does not pose a threat of harm. Does a provocateur thus retain the same defensive rights as an innocent person if her provocation is met with a violent response? In recent work, Kimberly Ferzan has argued that, despite not being liable to harm, a provocateur forfeits her right to self-defence. Rights forfeiture is explained by the fact that the provocateur engages in conduct that she knows may produce a violent response. Contra Ferzan, I argue that provocation cases are not categorically distinct from aggression cases. In both types of case, appeal to moral responsibility for an unjust threat can explain defensive rights forfeiture. Even though a provocateur does not pose a direct threat, she responsibly contributes to the creation of an unjust threat and she therefore forfeits some of her defensive rights. The extent of her contribution will determine the extent to which she forfeits her defensive rights.

Mental Disorder as a Puzzle for Constitutivism

Diana Heney, Fordham University

In this paper, I bring Aristotle’s constitutivist frame into conversation with contemporary conceptualizations of mental disorder. In section 1, I set out the basic commitments of a constitutivist Aristotelian account and show how it generates the hypothesis that a person with mental disorder could never flourish. In section 2, I present two contemporary concepts of mental disorder—one from the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, and one from philosopher George Graham’s The Disordered Mind. In section 3, I show that Graham’s concept of mental disorder supplies the basis for a compelling response. In section 4, I show that Aristotle’s discussion of death in Book I can be construed as supporting a second response. To the extent that the solutions sketched here succeed, constitutivists can avoid seeing the prevalence of mental disorder and the possibility of recovery from such disorder as a puzzle.

Deviant Assertion: When Relying on Accommodation Is Preferable

Samia Hesni, Massachusetts Institute of Technology

Sometimes, it seems better to presuppose than not. We are familiar with cases when presupposition accommodation is licensed. You ask me to dinner, I say, “I’m sorry; I have to pick up my brother.” But we can make the stronger claim that in this scenario, presupposition is not only licensed, but preferred. If you ask me to dinner and I say “I’m sorry; I have a brother and I have to pick him up,” that is considerably stranger—and, arguably, less cooperative. Current theories of presupposition accommodation explain when and why it is appropriate—or not defective—to rely on presupposition accommodation. But there’s a second, less discussed, question about when and why it is inappropriate not to rely on accommodation. I suggest constraints in response to the second question.

The Perils of P-Hacking and the Promise of Pre-Analysis Plans

Zoe Hitzig, Harvard University, and Jacob Stegenga, University of Cambridge

One response to the problems of publication bias and data dredging (or ‘p-hacking’) in the medical and social sciences is to demand that researchers formulate pre-analysis plans for their empirical studies. In a pre-analysis plan, researchers specify, in advance, the measurements and statistical analyses they plan to perform, committing themselves to specific statistical hypotheses prior to data analysis. Pre-analysis plans are used widely in the medical sciences but have only begun to gain traction in empirical social science research in the last decade. We articulate the epistemic harm of data dredging and the epistemic value of pre-analysis plans using the resources of Bayesian confirmation theory and model selection theory. In practice, scientists often depart from pre-analysis plans, and we describe the conditions under which pre-analysis plans can be valuable despite such departures.
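To illustrate the epistemic harm at issue, a minimal simulation of data dredging in general (an illustrative sketch, not the authors’ Bayesian or model-selection analysis; the study sizes and outcome counts below are assumptions): when a researcher tests many outcomes on data with no real effects and reports whichever comparison happens to clear p < 0.05, the effective false-positive rate far exceeds the nominal 5% that a single pre-specified test would carry.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 2000   # simulated studies, each with no true effect anywhere
n_outcomes = 20    # outcomes "dredged" per study
n_per_group = 50   # participants per arm

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_outcomes):
        treatment = rng.normal(size=n_per_group)  # null data: no treatment effect
        control = rng.normal(size=n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        p_values.append(p)
    if min(p_values) < 0.05:  # report only the "best" outcome
        false_positives += 1

print("Nominal false-positive rate per test: 0.05")
print("Observed study-level rate with dredging:", round(false_positives / n_studies, 2))

With 20 independent outcomes per study, roughly 1 − 0.95^20, about 64%, of null studies yield at least one nominally significant finding, which is the inflation a pre-analysis plan is meant to prevent.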

Pluralist Accounts of Knowledge and the Threshold Problem

Daniel Immerman, Independent Scholar

In general, where does the threshold lie between knowledge and lack thereof? The task of adequately answering this question is known as “the threshold problem.” The main goal of this paper is to use the threshold problem to criticize monist accounts of knowledge and support pluralist ones. In the context of discussing the weaknesses of monist accounts and the strengths of pluralist ones, I will, among other things, put forward and evaluate some explanations of the significance of knowledge.

The Problem of Creation and Abstract Artifacts

Nurbay Irmak, Boğaziçi University

Abstract artifacts such as musical works and fictional entities are human creations; they are intentional products of our actions and activities. One line of argument against abstract artifacts is that abstract objects are not the kind of objects that can be created. This is so, it is argued, because abstract objects are causally inert. Since creation requires being caused to exist, abstract objects cannot be created. One common way to resist this argument is to reject the causal inefficacy of abstracta. I will argue that the creationist should rather reject the principle that creation requires causation. Creation, on my view, is a non-causal relation that can be explained using an appropriate notion of ontological dependence. The existence and the creation of abstract artifacts depend on certain individuals with appropriate intentions, and events of a certain kind, including but not limited to creations of certain concrete objects.

The Role of Moral Knowledge in Pierre Bayle's Skepticism

Kristen Irwin, Loyola University Chicago

Though recent attention to Pierre Bayle has focused on the nature of his skepticism, little has been said about moral knowledge in Bayle, given that skepticism seems to preclude its possibility. I argue that Bayle should be read as a qualified Academic skeptic, and that this has important implications for the possibility of moral knowledge. First, insofar as moral beliefs are justified by bon sens (“good sense”), their justification is merely pithanon (plausible), never certain. Second, well-founded moral beliefs can only come from droite raison (“right reason”). Bayle is willing to ascribe certainty and universal accessibility to basic moral beliefs because they are delivered by right reason. Bayle discounts the reliability of reason with respect to non-moral beliefs, but he never questions it with respect to the moral truths of right reason. This means that insofar as these moral beliefs are indeed from right reason, we have certain moral knowledge.

Two 3D Cohabitants Are One Too Many Thinkers

Amir Arturo Javier-Castellanos, Syracuse University

According to the cohabitation account, all persons resulting from a given fission event cohabitate the same body prior to fission. According to endurantism, material objects persist by being wholly located at different times. In this paper, I discuss a problem for the combination of these views. Imagine you have Steve Buscemi sitting to your right and James Gandolfini sitting to your left. They are both suffering from a bad migraine. Buscemi is in 20 dolors of pain, whereas Gandolfini is in 30 dolors of pain. However, Buscemi is scheduled to undergo fission next Monday. You have a pill that will temporarily relieve the pain, which you have the option of giving to one of them. Who should you give it to? Given the two views in question, the answer implausibly depends on whether Buscemi will undergo fission next Monday.

Paradoxical Desires

Ethan Jerzak, University of California, Berkeley

I present a paradoxical combination of desires. I show why it's paradoxical, and consider ways of responding to the paradox. I show that we're saddled with an unappealing disjunction: either we reject the possibility of the case, or we reject some bit of classical logic. I argue that denying the possibility of the case is unmotivated on any reasonable way of thinking about mental content. So, I argue, the best response is a non-classical one, according to which certain desires are neither determinately satisfied nor determinately not satisfied. I conclude by exploring connections with more traditional semantic paradoxes, and argue that this paradox requires a new way of thinking about propositional attitudes.

Species of Pluralism in Political Philosophy

Kyle Johannsen, Trent University

“Pluralism” is a familiar name in political philosophy. From John Rawls’s worries about reasonable pluralism and its implications for legitimacy, to Isaiah Berlin’s radical plurality of conflicting values, the name “pluralism” frequently rears its head. However, though they use the same term, theorists often have different things in mind. Whereas reasonable pluralism is concerned with different citizens possessing different conceptions of justice and the good, value pluralism is about the structure of practical reasoning. In this paper, I argue that value pluralism is part of the best explanation for reasonable pluralism. However, unlike other philosophers who have argued for this claim, I argue that it is compatible with the view that state coercion must be justified to citizens without relying upon controversial moral premises. Though it would be problematic if value pluralism were accorded a justificatory role in political liberalism, according it an explanatory role is quite different.

Reasons for Love: Against the Substitution Argument

Carter Johnson, Arizona State University

Do people love one another for non-relational qualities, such as wit and virtue? Or do people love one another for relational qualities, such as a shared history? The substitution argument is often taken to rule out the former. It proposes a thought experiment in which the beloved is replaced by a perfect duplicate. Since you would not accept this duplicate as a substitute for the beloved, non-relational qualities of the beloved allegedly do not provide reasons for love. This paper shows why the argument is weak. Most importantly, the argument conflates several kinds of reason: reasons for love are distinct from reasons for holding on to a beloved, which in turn are distinct from reasons for commitment. I explain each of these kinds of reason. A discussion of the argument’s weakness begins philosophical work on these distinct kinds of reason and shows the promise of future work on the same.

Only Knowers Live Well

Brian Kim, Oklahoma State University

Explorations of the eudaimonic value of knowledge have concluded that while there are contingent connections between well-being and knowledge, being a knower is not necessary for human flourishing. In opposition, I argue that if, as many believe, achievement is necessary for well-being, then knowledge is necessary for well-being by way of being necessary for achievement.

Is Normative Consent a Theory of Authority?

Chris King, Miami University of Ohio

There are at least two explanations for why one agent could have a duty to do as another one directs. The first explanation is that one has such a duty when the directives enable one to do what one ought to have done on independent grounds. Doing as directed depends on the content derived from these grounds. The second explanation is authority. If there is authority, one has a duty to do as one is commanded on the grounds of the command. The first explanation is more intuitively plausible. It presents a coherent explanation of the relation between commands, their content, and the duty. For a variety of reasons, the theory of authority often fails to explain a content-independent duty to obey. However, by discussing often overlooked features of Normative Consent, I will argue that authority can explain such a duty at least as well for many cases.

Revisiting the Substance-Artifact Distinction: Or, Why Aristotle Went Organic Before It Was Cool

James Kintz, Saint Louis University

The difference between substances and artifacts has always been important within Aristotelian metaphysics. Aristotle gave ontological priority to substances, demoting non-substantial beings like artifacts to a secondary ontological status. Yet this Aristotelian framework is somewhat counterintuitive, for we tend to think of artifacts like tables and houses as having the same “level” of being as substances like trees and cats. In this paper I briefly recount what are often taken to be the salient differences between substances and artifacts for Aristotle, arguing that these criteria do not adequately distinguish between these different sorts of being. Nevertheless, I maintain that there are important differences between substances and artifacts—viz. substances have epistemological priority, ontological independence, and the principle of self-realization. Thus whatever else may be true of artifacts, substances are still ontologically primary. I close by noting the importance of these three criteria of substancehood for contemporary hylomorphism.

Slurs are Directives

Cameron Domenico Kirk-Giannini, Rutgers University

Recent work on the semantics and pragmatics of slurs has explored a variety of ways of explaining their potential to derogate, with the two most popular families of approaches appealing to either (i) the propositions semantically or pragmatically communicated by, or (ii) the non-cognitive attitudes expressed by, speakers who use them. I argue that no approach in either family can be correct. I then propose an alternative semantic treatment of slurs, according to which they are semantically associated with both descriptive and directive content. On the view I defend, when speakers use slurs, they simultaneously propose to add an at-issue proposition to the conversational common ground and issue a not-at-issue command to their interlocutors to adopt a derogatory perspective toward members of the targeted group. This proposal avoids the problems faced by other accounts.

Scientific Social Ontology? Objecting to Kincaid on the Existence of Social Classes

Richard Lauer, St. Lawrence University

Much discussion about social ontology is concerned with broadly intuitive a priori considerations, as well as technical tools drawn from analytic metaphysics. However, Kincaid (2015) has argued for social realism by motivating a realistic interpretation of social scientific theories, in particular those theories concerned with the existence of ruling classes. This paper uses Kincaid’s discussion to evaluate the prospects of a scientific social ontology. Because scientific ontology presupposes scientific realism, I will object to Kincaid’s argument by addressing the plausibility of realism about social scientific theories. I present three objections to Kincaid’s argument, ultimately expressing pessimism about the prospects for a scientific social ontology.

From the Fixity of the Past to the Fixity of the Independent

Andrew Law, University of California, Riverside

The principle of the Fixity of the Past (FP) holds, roughly, that if an agent’s performing a certain action requires the past to be different, then the agent cannot perform the action in question. While FP is controversial, many admit it is quite intuitive. Recently, though, several authors have argued that FP ought to be abandoned in favor of another principle, sometimes called the Fixity of the Independent (FI). According to FI, it is not past facts per se which are fixed. Rather, it is facts that are not dependent on the agent’s present action(s) which are fixed. In this paper, I present two new arguments for the claim that those who are sympathetic to FP ought to abandon it in favor of FI. The first argument appeals to relativistic physics. The second argument appeals to time travel.

On the Uniformity of Proper Names

Junhyo Lee, University of Southern California

Predicativism is the view that names are predicates in all of their occurrences. It is mainly aimed at providing a uniform account of referential and predicative uses of names. According to predicativism, proper names are semantically predicative, and referential uses can be explained by appeal to a covert determiner. However, this account cannot be extended to other data because it is not always syntactically possible to postulate a covert determiner. I appeal to two kinds of data: incorporated names and modified names. And then I offer a novel analysis of proper names. I propose that proper names are primarily referential but there is a null derivation that transforms names to predicates, working semantically like the suffix “-ian.”

Against Predicativism of Names

Jeonggyu Lee, University of California, Santa Barbara

According to predicativism of names, even names that occur in argument positions have the same type of semantic content as predicates. In this paper, I shall argue that names in these bare singular occurrences are not predicates. I first introduce the predicativists’ being-named condition on names, and present three objections against it: the modal, the epistemic, and the translation objections. Then I consider Fara’s “the”-predicativism and Bach’s nominal description theory as possible responses and show why they fail.

Should Event-Causal Libertarians Prefer Mele’s Daring Libertarianism to Kane's View?

John Lemos, Coe College

Robert Kane’s event-causal libertarian view of free will understands basic, underivatively free actions as involving dual efforts of will in the moments leading up to causally undetermined choice. Like Kane’s view, Mele’s daring libertarian (DL) view understands such basic, underivatively free actions as causally undetermined up to the moment of choice, but it rejects the requirement of dual efforts of will in favor of a single effort to decide. In recent writings, Mele has argued that event-causal libertarians should prefer his DL view to Kane’s view. In this paper, I explain the differences between these two views and Mele’s argument for why his view is preferable. I also argue that Mele’s case for the rational preferability of DL is flawed in significant respects.

Counting Experiments

Jonathan Livengood, University of Illinois at Urbana

In this paper, I clarify the argument from intentions for the Likelihood Principle, and I show how the key premiss in my formulation of the argument may be resisted by maintaining that creative intentions sometimes independently matter to what experiments exist.

The Shadow of Domination: Revising Pettit's Conception of Freedom

Alyssa Lowery, Vanderbilt University

In this paper, I examine Philip Pettit’s model of freedom as non-domination, tracing his critical commentary defending this conception against Hobbesian freedom as nonfrustration and Berlinian freedom as noninterference. I then follow the steps of this commentary in examining two contemporary cases of apparent unfreedom, that of the Female Runner and that of the Terrorized Citizen. I argue that Pettit’s model of non-domination cannot accommodate these two cases on his current account, and ought to be revised to do so in two ways: first, by allowing for the possibility of a merely potential dominator rather than an immediately identifiable one; and second, by taking into account the unreliability of “creating cost” for a potential dominator. These revisions make possible a richer account of non-domination that is more readily applied to contemporary instances of unfreedom.

How Many Aims Are We Aiming At?

Joshua Luczak, Leibniz Universtität Hannover

This paper highlights that the aim of using statistical mechanics to underpin irreversible processes is, strictly speaking, ambiguous. Traditionally, however, the task of underpinning irreversible processes has been thought to be synonymous with underpinning the Second Law of thermodynamics. This paper claims that contributors to the foundational discussion are best interpreted as aiming to provide a microphysical justification of the Minus First Law, despite the ways their aims are often stated. This paper also suggests that contributors should aim at accounting for both the Minus First Law and Second Law.

Shaping Patterns into Structures: Protein Science and the Mobility of Data

Dana Matthiessen, University of Pittsburgh

In a recent work by Leonelli, features of database curation and use are taken to demonstrate that data can function as non-local evidence. Non-local evidence supports claims about phenomena in novel research contexts without requiring reference to the conditions in which those data were generated. Drawing on protein science as a case, I argue that such reuse of experimental results enabled by databases cannot be fully understood without an account of the strategies used to rework initially recorded data into a processed form that has greater inferential power. This can only be done if researchers possess a sufficiently detailed understanding of the local context in which the data was produced.

Does Peer Disagreement Warrant Moral Skepticism?

Joshua May, The University of Alabama at Birmingham

Fundamental moral disagreements lead many moral skeptics to reject any claims to moral knowledge. Whether disagreements undermine moral knowledge, rather than just objectivity, depends on whether the disputants are epistemic peers. Some prominent responses to moral disagreement argue that one can rationally remain steadfast in light of it, but I draw on empirical research to develop a different line of response. The evidence suggests that few moral disagreements meet the relevant criteria of being both foundational and among epistemic peers. The threat is largely limited to only some of our most controversial moral beliefs. I conclude that, while some intellectual humility is warranted on the basis of peer disagreement, global moral skepticism isn’t.

What's Aggressive about Microaggressions?

Emma McClure, University of Toronto

Microaggressions are unintentional racial- and gender-based slights that can cause significant damage when repeated over time and across a variety of contexts. The existence and seriousness of microaggressions have begun to gain wide acceptance both within and beyond academia, but recently, Scott O. Lilienfeld has raised a challenge to the microaggression research project: what’s “aggressive” about microaggressions? Derald Wing Sue, the psychologist who has spearheaded the research on microaggressions, does not currently have an answer to this challenge. He needs a new theoretical framework to capture a kind of aggression that does not depend upon intent. I suggest turning to Bonnie Mann’s paper, “Creepers, Flirts, Heroes and Allies,” for a richer theoretical framework. Combining Mann’s discussion of creepiness with Sue’s discussion of other forms of subtle, gender-based discrimination would allow Sue to answer Lilienfeld’s objection and defend the legitimacy of the concept, “microaggression.”

The Necessity of Genuine Triadic Relations

William McCurdy, Idaho State University

A genuine triadic relation is a three-relata relation which cannot be analyzed into combinations of relations of any smaller adicity. Although one of the major pioneers of the algebra of logic, C.S. Peirce, contended that there are genuine triadic relations, contemporary logicians almost universally disagree. The grounds for this denial will be shown to be the consequence of an inadequate conception of relations in general and of triadic relations in particular. The poison well-spring is the Kuratowskian definitions of both n-adic relations and n-tuples. The Kuratowskian account will be critiqued and Peirce’s contention defended.
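For orientation, here is a minimal sketch of the standard Kuratowski construction that the abstract targets; this is the textbook set-theoretic definition, stated only for reference and not drawn from the abstract itself:

\[ \langle a, b \rangle = \{\{a\}, \{a, b\}\}, \qquad \langle a_1, \ldots, a_{n+1} \rangle = \langle \langle a_1, \ldots, a_n \rangle, a_{n+1} \rangle \]

\[ R \text{ is an } n\text{-adic relation over } A_1, \ldots, A_n \text{ iff } R \subseteq A_1 \times \cdots \times A_n \]

On this construction an n-tuple is a nested pair and an n-adic relation is a set of such tuples, so every triadic relation is built from dyadic machinery; that is the reduction whose adequacy the abstract disputes.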

Meddling in the Work of Another

Brennan McDavid, University of Melbourne

Plato's definition of justice in the Republic has two conjuncts: “doing one's own work” and “not meddling in the work of another.” I argue that it is a mistake to think that the two conjuncts entail one another. It is possible to do one's own work and also proceed to meddle in another's. There is nothing in the concept of doing one's own work that rules out the meddling. Plato likely would have observed this, and so his insistence that each part in a just composite must not meddle in the work of the other parts must be making a point that is distinct from, and not merely a repetition of, the point about each part doing its own work.

A Nietzschean Solution to Korsgaard's Problem of Action

Meredith McFadden, Lawrence University

Korsgaard famously posits a problem of action for reflective agents. To solve this problem, agents face the task of unifying themselves according to a principle so that the action resulting from their reflection counts as genuinely their own. Korsgaard claims that reflective agents are committed to choosing a principle that would remain stable in the face of changes to the agent’s drives and circumstances in the world because they are committed to adopting a principle. I propose that Nietzsche’s view of agential unification provides a story of how agents can develop a normative structuring of their drives that is sufficient for solving the problem of action Korsgaard outlines without satisfying her Kantian standards. This proposal not only brings out the problems in her approach to the problem of action, but also draws attention to the ways in which normativity applies to reflective agents in the context of particular actions and across actions.

Identifying Exploitation and Providing Care: Unifying the Capabilities Approach and Feminist Contractarianism

Lavender McKittrick-Sweitzer, Ohio State University

Perhaps the most attractive feature of the capabilities approach is its ability to handle issues pertaining to care and the achievement of sex equality, issues that social contract theory is widely regarded as ill-equipped to handle. (Nussbaum, 2003, p. 36) And yet, I want to argue that the capabilities approach and social contract theory—particularly feminist contractarianism (Hampton, 1993a)—are not mutually exclusive approaches to social justice. Rather, when they are united, a robust social theory emerges. On one hand, feminist contractarianism is appropriately completed by the capabilities approach, becoming a full political theory that can then address issues of care and sex equality. On the other hand, the capabilities approach gains the contract test, granting individuals the ability to recognize those relationships that they might productively engage in to fulfill their fundamental entitlements, and the fundamental entitlements of those around them.

The Panpsychism Question in Merleau-Ponty’s Ontology

Jennifer McWeeny, Worcester Polytechnic Institute

Does Merleau-Ponty’s notion of flesh entail panpsychism? Does an ontology of flesh, properly understood, imply that a kind of mind is present in all that exists? These questions hinge on whether we should understand the perceptual relationship between self and world as symmetrical, where each term is capable of what the other is capable of, or as asymmetrical, where sentience is attributed to humans and most living beings but not to rocks and other seemingly inanimate objects. This paper argues, with reference to Merleau-Ponty’s appropriations and modifications of panpsychist elements from the respective metaphysics of Leibniz and Scheler, that there is ultimately more evidence for a panpsychist reading of flesh than not. In short, Merleau-Ponty could not account for reciprocal expression between particulars, nor could his ontology move beyond the consciousness-object distinction, unless all beings were both sensible and sentient, that is, unless panpsychism obtains.

Political Incompetence and Defenses of Democracy

Benjamin Miller, University of Illinois at Urbana-Champaign

Much past and present work in political psychology casts serious doubt on the competency and knowledge of individual citizens. This work has mostly been ignored by scholars offering defenses of the intrinsic value of democracy, though recent (partially) instrumental defenses of democracy by epistemic proceduralists have taken such challenges more seriously. I argue that we need to carefully explore the possibility that systematic and persistent failures in citizens’ competency and knowledge not only potentially undermine the superiority of democracy as a form of decision-making in the epistemic sense, but may also undermine, or at least damage, defenses of democracy’s intrinsic value. I argue that the empirical findings on citizen incompetence, especially findings about the cognitive decision-making biases of most adult human beings, call into question two broad types of defenses of democracy—Fairness and Equal Standing defenses.

The Platonic Model 2.0

Benjamin Mitchell-Yellin, Sam Houston State University

I offer some refinements to Gary Watson’s Platonic Model of human agency and show how they help it to overcome perhaps the most forceful objection to the view, namely, that it fails to properly account for “perverse” cases.

Multilevel Social Contract Theory

Michael Moehler, Virginia Tech

I develop a novel multilevel social contract theory that, in contrast to orthodox multistage social contract theory, is tailored to the conditions of deeply morally pluralistic societies. The theory defines the minimal behavioral restrictions that are necessary to ensure mutually beneficial peaceful long-term cooperation, as compared to violent conflict resolution, in deeply morally pluralistic societies. I show that multilevel social contract theory represents a distinct approach in moral philosophy that helps to address some fundamental problems associated with orthodox (rational choice) multistage social contract theory and helps to reconcile different approaches within this tradition. More specifically, I argue that morality in its best contractarian version for the conditions of deeply morally pluralistic societies entails Humean, Hobbesian, and Kantian moral features.

What May I (Epistemically) Hope For?

Andy Mueller, Goethe-Universität Frankfurt

This essay pursues two related goals. The first is to argue for an epistemic norm for rationally permissible hope. This norm gives a necessary epistemic condition on rational hope, abstracting from the practical aspects of hope. First, I introduce the standard account of rational hope and an objection to it based on lottery cases. Then, to replace the standard account, I develop the following proposal to which the notion of knowledge is central: It is rationally permissible to hope that p only if one is not justified in believing that one is in a position to know that not-p. Finally, turning to my second goal, I explain how this proposal suggests a novel way to vindicate an aspect of the knowledge-first program. The concept knowledge plays an uneliminable role in our thinking about what we can rationally hope for and thus in regulating an important part of our mental lives.

What's Wrong with Accurate Statistical Generalizations?

Jessie Munton, New York University

What, if anything, is wrong with statistically accurate generalizations about certain demographics? Does the intuitive discomfort we feel with accurate generalizations around race and gender, for instance, reflect mere political correctness, or is it a symptom of a legitimate flaw with the beliefs which encode these statistics? This paper locates a distinctively epistemic flaw that can arise in accurate statistical beliefs of any kind, including those pertaining to demographic generalizations. The flaw lies in the broader explanatory structure responsible for setting limits on the domain of individuals to which the statistic is taken to apply. The account is perfectly general, and applies to both politically charged and uncharged beliefs, but can also explain why demographic generalizations carry the charge they do.

A Problem for Fundamental Atomic Facts

Daniel Murphy, SUNY Cortland

Supposing we countenance the notion of a “fundamental” fact, there are various disputes about what kinds of facts are fundamental. (For example, are any facts about mental goings-on fundamental? What about moral ones? etc.) Now here’s one view about fundamental facts that might seem attractive, and that can be shared by philosophers who otherwise significantly disagree about fundamental reality: the fundamental facts are (largely if not exclusively) atomic (e.g., the fact that a is F, that a and b are R, etc.). Since this view concerns the form of fundamental facts, it leaves a lot open about fundamental reality. But I think this view is mistaken, and in this paper I develop and defend a particular problem for it. The basic idea is that, if we countenance the notion of “essential” features of fundamental things, the view that there are fundamental atomic facts runs into a problem concerning overdetermination.

Prediction in Science: How Big Data Does—and Doesn’t—Help

Robert Northcott, Birkbeck College London

Have big data methods revolutionized our ability to predict field phenomena, i.e., phenomena outside of the laboratory? When are their predictions successful? I draw on three case studies—of weather forecasting, election prediction, and GDP forecasting—to go beyond existing philosophy of science work. Their overall verdict is mixed. Two factors are important for big data prediction to succeed: underlying causal processes must be stable, and the available dataset must be sufficiently rich. However, even satisfying both of these conditions does not guarantee success. Moreover, the case studies also illustrate some of the reasons why the conditions may not be satisfied in a given case. A final lesson is that when predictive success is achieved, it is local and hard to extrapolate to new contexts. More data has not countered this trend; if anything, it exacerbates it. There is reason to think this true of field cases generally.

No Time for the Hamiltonian Constraint

Joshua Norton, American University of Beirut

The theory of loop quantum gravity (LQG) is one of the leading contenders for a theory of gravity at the Planck scale, and yet like all such contenders—string theory, causal set theory—LQG is fraught with a variety of difficulties. Most of these difficulties are technical in nature and are not burdened by the conceptual angst inherent in what has come to be known as “the problem of time.” According to this problem, time is described by LQG as being “frozen” or missing from the world. In the following, I will address the problem of time by highlighting dubious assumptions about the essential nature of time. I will argue that the problem of time results not from the Hamiltonian constraint, as is often claimed, but from our interpretation of LQG.

Ethical Justification in the Twentieth Century

Onora O'Neill, University of Cambridge

Nobody doubts that the twentieth century saw many challenges to ethics, including various forms of subjectivism, relativism and (of course) logical positivism. The conventional story also claims that the crisis is past, that ethics has been revived and that this is shown by the great flowering both of normative political philosophy and of “applied” ethics, not merely in philosophical work but in the discussions and practice of many professions and activities and in public discussion. I shall suggest that the crisis has been rather deeper and more persistent than this picture suggests, and that its most persistent feature has not been uncertainty about justifications—much as that matters—but widespread acceptance of a reversal of perspectives that prioritises rights over duties, which obscures many types of ethical claim. We still need to think carefully about the justifications that can be given for ethical principles, and about the extent to which principles can be used to shape practice.

The Biological Reality of Race Does Not Underwrite the Social Reality of Race

Kamuran Osmanoglu, University of Kansas

Quayshawn Spencer (2014) defends the biological reality of “race.” He argues that “race” as used in the current US racial discourse picks out a biologically real entity. First, Spencer says that the current US census classification yields five different races. Second, he argues that recent human population genetic research also yields an interesting level of genetic clustering at the K=5 level. Thus, he contends that the current US racial discourse matches nicely with recent genetic population clustering results. Therefore, he argues that “race”, in its US meaning, picks out a biologically real entity. However, we argue that Spencer’s argument fails to prove that “race” is a biologically real entity in a broader sense. Moreover, this broader sense of “race” is much more interesting than the US sense, and does much better justice to the social reality of universal race discourse. Furthermore, there are internal worries with Spencer’s argument.

Physicalism and Fundamental Mentality: Saying No to the No Fundamental Mentality Constraint

James Otis, University of Rochester

In this paper I argue against Jessica Wilson's "no fundamental mentality" (NFM) constraint on formulations of physicalism. Wilson and Janice Dowell have each offered formulations of physicalism that are similar except that Wilson includes the additional NFM constraint—i.e., that whatever it means to be physical, the physical cannot be fundamentally mental. I reject this constraint by first showing that Wilson's pragmatic considerations in favor of the constraint fail. I then offer a further consideration based on the primary argument for physicalism—a success-of-physics argument. This argument implicitly undermines the NFM and so we should reject the constraint. This means that physicalism is compatible with the existence of fundamental mental entities in some cases.

Why Moral Judgment Is Not a Natural Kind

John Jung Park, Oakland University

Numerous philosophers and psychologists have been debating whether moral judgment is a natural kind, that is, a real category that reflects the structure of the natural world. I offer a novel evolutionary argument against the view that moral judgment is a natural kind. Insofar as this natural-kinds view fails and there is also additional evidence against the general position that moral judgment is a natural kind, I conclude that what the moral domain is reflects the interests and actions of human beings. Moral judgment, understood as a psychological state, is an artifact kind.

The Mansplainer: Epistemic Peacock, Knowledge Fraud, Credibility Thief

John Partridge, Wheaton College Massachusetts

Rebecca Solnit’s essay, “Men Explain Things to Me,” offers a rich and important account of “mansplaining.” I hope to provide additional conceptual clarification and philosophical analysis of Solnit’s concept. I show first that the objectification and silencing that mansplaining produces invite comparison with Miranda Fricker’s notion of preemptive testimonial injustice. Then, drawing on the work of José Medina and other epistemologists of ignorance, I show that the mansplainer is distinctive among other bad explainers residing in our social imaginary. Unlike the Wine Bore and Captain Obvious, he is an epistemic fraud and a credibility thief. He pretends to possess the primary epistemic good of being a knower in order to acquire its social meaning: credibility and authority. I conclude by briefly comparing and contrasting mansplaining and “bropropriation.” I also show how Solnit’s essay both depicts and enacts a compelling act of epistemic resistance called “amplification.”

Mere Manifestations of Agency?

Jonathan Payton, University of Toronto

Standard metaphysical theories of action assume that actions are events. Thus, they have a problem with “negative” actions, in which an agent manifests her agency by not doing a certain thing: negative actions seem to be, not events, but absences thereof. Some philosophers respond that, while these behaviours are manifestations of agency, they are not, strictly speaking, actions. I argue that the distinction between those manifestations of agency that are actions and those that are not cannot be drawn. Agency, I argue, is the power to act, and so, necessarily, all of its manifestations are actions (Section 2). I consider an alternative view, on which agency is a higher-level property rather than a power, and show that a variant of my argument can be mounted which incorporates it (Section 3). Thus, there are no “mere manifestations of agency”, and a satisfactory theory of action must accommodate negative actions.

What Poetry Teaches

Antonia Peacocke, University of California, Berkeley

Many great writers have said that literature can give us new phenomenal concepts. But many philosophers have argued that you have to have an experience instantiating some phenomenal quality in order to acquire a concept of it. How could we ever acquire phenomenal concepts by reading literature? In this paper I explain how this is possible, despite the relevant experiential constraint on phenomenal imagination, by arguing for two key points. First, there is another necessary condition on acquiring a phenomenal concept: you have to notice a property in your experience to form a concept of it. Second: while active phenomenal imagination is constrained by your preexisting phenomenal concepts, passive phenomenal imagination is not. Poetic comparisons like metaphors can exploit passive phenomenal imagination to help you meet the noticing condition, thus expanding the range of your active phenomenal imagination. I detail a few examples.

The Race Debate and the Work of Intuitions

Louise Pedersen, University of Utah

Can intuitions serve as reliable evidence for grounding a notion of race? In contemporary philosophy this question has not been settled, in part because an intuition-based “folk concept” of race differs noticeably from a social constructionist “revisionary view” of race. In this paper I will argue that a revisionary, counterintuitive approach to a conceptual racial analysis provides better evidence for what race is and for what race should be than one that is—at least partially—founded on intuitions captured by the census survey that lead to a view of biological racial realism. More specifically, the use of regular folks’ intuitions about race as evidence for biological racial realism cannot settle the race debate, as it is ill equipped to deal with the complex questions concerning race relations in the United States.

Fitting Emotions and a New Form of Epistemic Injustice

Kathryn Pendoley, The Graduate Center, CUNY

This paper identifies an as-yet overlooked form of epistemic injustice that arises from emotional fittingness norms. To this end the paper argues that operative fittingness norms for emotion, which are sensitive to actual complex social contexts, apply differentially with respect to social identities such as race, gender, and class. Through emotional development we learn these operative fittingness norms and come to have emotions guided by them. However, since operative fittingness norms apply differentially across social groups, members of oppressed groups sometimes fail to have the very emotions that would best promote their epistemic well-being. This should be understood as a form of epistemic injustice: Oppressive fittingness norms that apply differentially according to social identities help to prevent emoters from having the very emotions that would best benefit them epistemically.

Introspection and the Attitudes

Jared Peterson, University of Wisconsin–Parkside

A number of philosophers hold that we are able to possess highly epistemically secure, evidentially grounded knowledge of some of our attitudes via introspection. One common explanation for how we possess such knowledge is that we do so by detecting the unique phenomenology certain attitudes allegedly possess. There have, however, been no detailed accounts offered in the literature concerning how we know our attitudes on the basis of such phenomenology. In this paper, I offer such an account. I do so by developing a novel approach to the epistemology of desire, one that involves a process I call phenomenal simulation. Phenomenal simulation involves entertaining a particular mental representation and becoming aware that we are tokening a particular experience towards that representation. I explain how we are able to grasp the output of phenomenal simulation in a manner that affords us highly epistemically secure, evidentially based introspective knowledge of our desires.

A Defense of Historical Data in Climate Science

Dasha Pruss, University of Pittsburgh

Providing confirmation for climate change theories is challenging because the amount of systematically measured climate data is quite small. Aside from waiting for confirmatory predictions, these theories can be confirmed using data about the deep past and outputs of climate models. The latter has been the basis of much discussion among philosophers of science, but to date, little attention has been given to the use of historical data in climate science. I show that the use of historical data is justifiable in spite of their sparseness and uncertainty, multiple inferential steps, and the challenge of inferring causation from historical data. Robustness helps overcome the sparseness and uncertainty of historical data; the inferential steps and assumptions in proxy data also underpin well-established scientific findings; and using experimental data and the asymmetry of overdetermination helps address the difficulty of inferring causation from historical data.

Double-Halfer Embarrassment Diminished: A Reply To Titelbaum

Joel Pust, University of Delaware

“Double-halfers” think that throughout the Sleeping Beauty Scenario, Beauty ought to maintain a credence of 1/2 in the proposition that the fair coin toss governing the experimental protocol comes up heads. Titelbaum (2012) introduces a novel variation on the standard scenario, one involving an additional coin toss, and claims that the double-halfer is committed to the absurd and embarrassing result that Beauty’s credence in an indexical proposition concerning the outcome of a future fair coin toss is not 1/2. I argue that there is no reason to regard the credence required by the double-halfer as any less acceptable than the one deemed required by Titelbaum.

Aristotle on Theoretical and Practical Wisdom

Bryan Reece, University of Toronto

It is standardly held that Aristotle thinks that one can have theoretical wisdom without practical wisdom. The primary evidence for the standard view is a passage from Nicomachean Ethics 6.7 in which he apparently claims that Anaxagoras and Thales have theoretical wisdom but not practical wisdom. I argue that Aristotle is not offering this as his own view; rather, he is reporting a reputable opinion the appeal of which his view will explain. Moreover, I argue that Aristotle in fact gives strong positive reasons to think that one who has theoretical wisdom must have practical wisdom.

Pessimism Redux

Walter Reid, Syracuse University

Schopenhauer’s pessimism holds that life is not worth living. Suffering and Insufficient Value are widely cited reasons for pessimism. In Section One, I examine whether suffering implies pessimism. Pessimism doesn’t follow from suffering per se. Rather, suffering follows from there being insufficient value, i.e., nothing that is ultimately meaningful. In Section Two, then, I evaluate Schopenhauer's grounds for claiming there’s insufficient value, his more fundamental reason for pessimism. The current literature acknowledges a connection between insufficient value and pessimism, but hasn’t developed that link thoroughly. I intend to develop that link, and indeed to argue in favor of it. Accordingly, my thesis is that the argument from insufficient value renders pessimism a viable stance within the contemporary debate concerning the value of human life.

A Higher-Order Approach to Diachronic Continence

Catherine Rioux, University of Toronto

When faced with temptation, can it be rational to follow through on your previous intentions despite now believing that you ought to succumb? On a two-tier model of practical rationality emphasizing the rationality of certain habits of non-reconsideration, the answer is no: if you have reconsidered and now judge that you ought to succumb, then you are akratic if you stick to your previous intention. I present an alternative answer, drawing on the kind of evidence available to the agent when facing temptation. I argue that “higher-order evidence” concerning the agent’s deliberative malfunction provides her with an exclusionary reason. Thus, when the agent follows through on her previous decision, she responds to the reasons she should be responding to, namely, the ones she considered when she deliberated. And when the agent temporarily judges that she ought to perform the tempting action, she disregards her higher-order evidence and is irrational.

When Is the Self-Evident Evident? Thomas Reid and the Evidence of First Principles

James Dominic Rooney, Saint Louis University

Significant debate surrounds how to interpret Thomas Reid’s theory of the “self-evident” nature of first principles. What is particularly difficult to integrate into accounts of the self-evidence of first principles is that Reid lists a series of general “first principles of contingent truths.” One common view is that Reid is a pragmatist about the self-evidence of first principles. But, I will argue, this fails to make sense of how there can be first principles which are both general in form and have contingent truth as their object. I conclude by suggesting that Reid should be interpreted as proposing a (novel) “proper function reliabilist” view of self-evidence.

Frankfurt, Free Will, and the Problem of Self-Control

Adina Roskies, Dartmouth College, and Ryan Cummings, Independent Scholar

Frankfurt’s compatibilist account of free will considers an individual to be free when her first- and second-order volitions align. This structural account of the will, we argue, fails to engage with the dynamics of will, resulting in two shortcomings: 1) the problem of directionality, or that Frankfurtian freedom obtains whenever first- and second-order volitions align, regardless of which desire was made to change, and 2) the potential for infinite regress of higher-order desires. We propose that a satisfying account of the genesis of second-order volitions can resolve these issues. To provide this we draw from George Ainslie’s mechanistic account of self-control, which relies on intertemporal bargaining wherein an individual’s self-predictions about future decisions affect the value of her current choices. We suggest that second-order volitions emerge from precisely this sort of process, and that a Frankfurt-Ainslie account of free will avoids the objections previously raised.

Democratic Egalitarianism and the Indifference Objection

Eric Rowse, University of Missouri

A promising version of relational egalitarianism (also known as social egalitarianism) is Elizabeth Anderson’s theory of democratic egalitarianism. It asks us to socially guarantee the democratic capabilities. These are the capabilities that people require to avoid oppressive relationships and stand as equals in a democratic society. Once we socially guarantee the democratic capabilities, Anderson argues, we fulfill all of our egalitarian duties. I examine the indifference objection, which says that, even after everyone has the democratic capabilities, important egalitarian concerns remain. In particular, democratic egalitarianism ignores the distribution of burdens upon people who have, and are not at risk of losing, the democratic capabilities. After assessing the main responses to the indifference objection, I argue that the indifference objection succeeds. It follows that, contrary to Anderson, we likely have egalitarian duties that go beyond realizing democratic egalitarianism.

Fear and Resentment

Grant Rozeboom, St. Norbert College

Resentment is a characteristically defensive response to being wronged, which means that it is a response to a perceived threat. If our moral standing as persons is as secure as many say it is, and if resentment is an appropriate response to being wronged, then it is puzzling that resentment is a defensive attitude. What are we defensive about, if our moral standing is in no way threatened by the wrongs we suffer? I canvass four potential answers to this question and argue that none of them succeed. Two of them do not adequately explain how being wronged poses a general threat, and the other two work only if our moral standing is a part of what is threatened by wrongdoing, i.e., only if it is less secure than it is supposed to be.

Ideological Innocence

Daniel Rubio, Rutgers University

A theory’s ontology comprises the things its quantifiers must range over if it is true. Its ideology comprises the primitive concepts that must be used to state the theory. This splits the theoretical virtue of parsimony into two kinds: ontological and ideological parsimony. My goal is to give a rule for when additional ideology does not count against parsimony. I propose the expressive power innocence criterion: if the ideology of theory one is expressively equivalent to that of theory two, then neither is ideologically simpler than the other. I offer two arguments for this. First: the argument from accuracy, showing that plausible rival criteria for ideological parsimony divide material equivalents, and thus cannot capture a theoretical virtue. Second: the argument from cases, showing that the criterion coheres with our intuitions in particular cases. Finally, I repel an objection raised by Nelson Goodman against attempts to reckon simplicity by expressive power.

Conditional Excluded Middle in Informational Semantics

Paolo Santorio, University of California, San Diego

All semantics for indicative conditionals struggle with a logical tension, which is inherited from the classical Stalnaker/Lewis debate on counterfactuals. On the one hand, conditionals (including epistemic conditionals and counterfactuals) seem to satisfy a principle of Conditional Excluded Middle; on the other, conditionals of the form “If A, not C” seem incompatible with the corresponding conditionals of the form “If A, might C.” Unfortunately, these requirements are jointly unsatisfiable on standard notions of consequence. Hence classical accounts reject one of the two principles. I show that a different solution is available: the tension can be solved by developing a non-truth-conditional semantics for conditionals (which I call “path semantics”), which is broadly in the dynamic/expressivist family. Path semantics generates a nonclassical notion of consequence that validates both principles. Given space constraints, I focus on epistemic conditionals, but the account can in principle be extended to counterfactuals.
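For concreteness, the two jointly troublesome principles can be schematized as follows; the notation, with “>” for the conditional and “◇” for epistemic “might,” is mine rather than the abstract's:

\[ \text{Conditional Excluded Middle:} \quad (A > C) \lor (A > \neg C) \]

\[ \text{Incompatibility:} \quad (A > \neg C) \text{ and } (A > \Diamond C) \text{ cannot both be accepted} \]

On standard notions of consequence these two constraints cannot be jointly satisfied, which is the tension the proposed path semantics is meant to dissolve.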

Why the Strawsonian Reactive Attitudes Do Not Constitute An Expressive Theory of Moral Responsibility

Nicholas Sars, Tulane University of New Orleans

Peter Strawson’s “Freedom and Resentment” is among the most influential works of the last century, with references to Strawsonian reactive attitudes ubiquitous in the contemporary moral responsibility literature. In this paper, I challenge the received understanding of Strawson’s examination of our moral practices. By clarifying the overlooked analogical structure of Strawson’s argument, I show that Strawson cannot intend the attitudes upon which he focuses to collectively constitute a theory of moral responsibility. Although contrary to the dominant reading of Strawson in the literature, my revised understanding promises to both provide a more faithful representation of the text and address some prominent challenges raised by commentators.

Moorean Arguments: Uses and Abuses

Matthew Scarfone, McGill University

The Moorean Argument against moral error theory goes as follows: if moral error theory is true, then it is false that torturing people for fun is wrong; but it’s true that torturing people for fun is wrong; so, the moral error theory is false. While a key premise in the argument (the Moorean Claim) enjoys presumptive justification, in the face of what we can call a “Full-Fledged Error Theory” that Moorean Claim does not have dialogical justification. So some other argument justifying the Moorean Claim must be forthcoming. In the meantime, the Moorean Argument against the moral error theory fails.

Honor, Worth, and Justified Revenge in Aristotle

Krisanna Scheiter, Union College

Aristotle claims that there are times the virtuous person ought to seek revenge. Many assume he is unreflectively adopting ancient Athenian values in which honor is revered and avenging oneself is the mark of a great character. But while a successful act of revenge may restore the honor of the avenger, establish her place within society, and even bring the offender to justice, these are not the principal aims of revenge. I argue that the aim of revenge, according to Aristotle, is to empower the avenger, proving that she has the worth the offender implies she lacks. Even if no one else were around to witness the act of revenge (so that it had no effect on her reputation or status), the aim of revenge would still be achieved. The virtuous person will generally ignore slights, but there may be times when even she needs to seek revenge.

An Expertise Defense via Armchair Physics

Samuel Schindler, Anna Drozdzowicz, and Pierre Saint-Germier, University of Aarhus

Experimental philosophers have argued that the gap between the intuitions of philosophers and the folk is reason for concern. In this paper, we seek to strengthen the expertise defense according to which the folk are not suited to make the appropriate judgments in philosophical thought experiments. We argue that the analogy underlying the expertise defense is best drawn between judgements in thought experiments in philosophy and physics: the judgements are relevantly similar. But since a gap between folk and expert intuitions concerning judgements in thought experiments in physics is neither surprising nor problematic, the gap highlighted by experimental philosophers should then not be either.

The Ability Ought Implies

Ben Schwan, University of Wisconsin–Madison

That an agent ought to φ only if she is able to φ appears to be relatively uncontroversial. But the truth of an ability ascription depends on an almost always implicit characterization of the relevant possibility space. Given this, we should wonder: exactly what ability does ought imply? In this paper, I argue that the normatively relevant sense of ability—the sense that ought implies—tracks the reach and limits of intentional control. And, since agents lack control over (at least much of) what they believe and desire, and since what an agent believes and desires constrains what it is possible for her to do, the normatively relevant sense of ability must be both epistemically and motivationally restricted. If this is right—if this is the ability ought implies—then an agent’s obligations are likewise constrained by both her epistemic and her motivational lot.
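A schematic rendering of the principle under discussion may help fix ideas; the notation is mine rather than the author's:

\[ O_a\,\varphi \;\rightarrow\; \mathit{Able}_a\,\varphi \]

Here \(O_a\,\varphi\) says that agent a ought to φ and \(\mathit{Able}_a\,\varphi\) says that a is able to φ. The abstract's question is how the ability operator should be read, and its answer is that the relevant reading is restricted to what lies within the agent's intentional control, given her actual beliefs and desires.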

Two Feelings in the Beautiful: Kant on the Structure of Judgments of Beauty

Janum Sethi, University of Michigan

I propose a solution to the infamous puzzle that lies at the heart of Kant's Critique of Judgment. The puzzle arises because Kant asserts two claims that seem to be in conflict: 1. (F>J) A judgment of beauty is aesthetic, i.e., grounded in feeling. 2. (J>F) A judgment of beauty could not be based on and must ground the feeling of pleasure in the beautiful. I argue that the two theses are consistent. Interpreters have overlooked the fact that Kant distinguishes two feelings—a feeling of the harmony of the cognitive faculties that is the ground of judgments of beauty (F1>J), and the feeling of pleasure that is its consequence (J>F2). Distinguishing these two feelings also resolves another long-standing problem for Kant's “deduction” of judgments of beauty: Kant can maintain that the harmony of the faculties is a condition of judgment in general without implying, absurdly, that all judgments are pleasurable.

Institutional Refusal to Assist in Dying: The Positive Case

Phil Shadd, Independent Scholar

While it is a settled fact that individual doctors who object to medical assistance in dying (MAiD) have a right of refusal based in conscience, much less settled is whether institutions have a right of refusal and whether institutions might have a right of refusal not based in conscience. In this paper I lay out the positive case for institutional exemption from MAiD. The basic argument is that institutions such as hospitals have a natural right of self-governance which, in the case of hospitals, includes the right to decide if they will offer MAiD. It is a mistake to frame the MAiD debate simply in terms of conscience-based opposition or full participation. There are many non-religious reasons—such as institutional capacity and protecting access to palliative care—for which a hospital can legitimately exercise its right of refusal.

A Seat at the Lab Bench for Non-Epistemic Values: A Critique of the Ideal of Value-Free Science

Ahmed Siddiqi, University of Houston

While it seems that many philosophers of science have agreed that non-epistemic (e.g., moral, political) values play a pre-scientific role (e.g., deciding what study to fund), more controversial is the belief that non-epistemic values play a role within scientific reasoning. Heather Douglas believes that so long as studies contain decision points where an error would carry social or political consequences, non-epistemic values will inevitably play a role at those decision points (Douglas 2000, 2009). Gregor Betz has recently written an article defending the ideal of value-free science, arguing that Douglas’ worries about arbitrary values infiltrating scientific reasoning can be defused by scientists’ careful articulation of findings and by making uncertainties explicit (Betz 2013). In this paper, I will argue that Betz does not successfully defuse Douglas’ concerns because even the process of carefully articulating findings and uncertainties is subject to the influence of non-epistemic values.

Semantics with Assignment Variables

Alex Silk, University of Birmingham

This paper develops a new framework for compositional semantics, and begins illustrating its fruitfulness by applying it to certain core linguistic data. The key move is to introduce variables for assignment functions into the syntax; semantic values are treated systematically in terms of sets of assignments, now included in the model. The framework provides an alternative to traditional “context-index”-style frameworks descending from Kamp/Kaplan/Lewis/Stalnaker. A principal feature of the account is that it systematizes various seemingly disparate linguistic “shifting” phenomena, such as with quantifiers, intensionality, and context-sensitivity under modals/attitude verbs. The treatment of the syntax/semantics provides an elegant standardization of quantification across domains (individuals/worlds/assignments), via a generalized binder-index resulting from type-driven movement. The account affords a unified analysis of the context-sensitivity of pronouns, epistemic modals, etc., in the spirit of contextualism, while compositionally deriving certain distinctive shifting/binding phenomena and providing a framework for theorizing about differences in tendencies for local/global readings.

Vagueness and Exclusion

Kenneth Silver, University of Southern California

One worry often raised about the distinct existence of ordinary objects concerns their efficacy: what do ordinary objects do that is not already fully explained by the atoms that make them up? If ordinary objects cannot be shown to have distinct efficacy, there will be no reason to posit their existence. This is the so-called “causal exclusion problem” for ordinary objects. In response, I show how the problem can be solved if we take macroscopic objects to be metaphysically vague. The causally relevant properties of some macroscopic object cannot be determined by the set of molecules that compose it if there is no determinate set of molecules composing it. After filling out this solution, I consider and respond to the objection that prominent views of metaphysical vagueness cannot appeal to this solution.

How Ideologies Get Their Functions

David Smith, University of New England

In this paper, I focus on the functional conception of ideology, according to which ideologies are beliefs (and their associated practices) that have the function of bringing about or sustaining the oppression of certain social groups by others. But this seemingly transparent definition conceals a major ambiguity, because there exist two views of what functions are. According to the causal account, the function of a thing is what it does, and according to the teleological approach the function of a thing is what it is for. It follows that there are really two functional conceptions of ideology: a causal one and a teleological one. Drawing on work in the philosophy of biology, I show that it is possible to give a teleological analysis of the function of ideology without defaulting to psychologism.

Love in the Time of Alzheimer’s: Cognitive Degeneration, Internality, and Love

Jared Smith, University of California, Riverside

In “Holding on to the Reasons of the Heart,” Andrew Franklin-Hall and Agnieszka Jaworska propose a caring-based account of love, arguing that some with Alzheimer’s retain the capacity to love. Yet, their proposal is mistakenly premised on the reliability of the “emotional memories” of these agents, since many Alzheimer’s patients display interests that are markedly disconnected from those they previously held. Drawing on two counterexamples, as well as the notion of internality at the heart of caring, I argue this caring-based account of love must have an internality requirement. Yet, the “love-like” attitudes of these agents do not necessarily speak for them in the way needed to satisfy the internality requirement; they do not track an agent’s interests over time, and cannot unite her temporally extended identity. While it is possible for some with Alzheimer’s to meet these requirements, the bar is set much higher than Franklin-Hall and Jaworska suppose.

Gender and Theory-Acceptance in Ethics

Nicholas Smyth, Simon Fraser University

In recent years, non-consequentialist theories have been subjected to a variety of debunking arguments. It has been alleged, for example, that “characteristically deontological” judgments arise from emotional or intuitive psychological mechanisms which we have antecedent reason to distrust (Greene 2016, Singer 2017, 1). In this paper, I explore a structurally identical (if much more provocative) argument, which begins with a simple question: why are most utilitarians male? As it turns out, all moral theorists have reason to worry about heavy demographic skew, since it suggests that theory-acceptance is heavily influenced by an epistemologically irrelevant factor. Utilitarians, however, encounter particular difficulties in explaining the extraordinary demographic skew. After outlining the precise nature of the debunking challenge, I go on to suggest that the most empirically plausible explanation for the correlation is very unlikely to be of any comfort to the utilitarian.

Nietzsche on the Origin of Obligation

Avery Snelson, University of California, Riverside

In the Genealogy’s second essay, Nietzsche ostensibly argues that our concept of obligation originated as a debt within creditor-debtor relationships (Leiter 2015; Reginster 2011, forthcoming). According to this contractualist reading, creditor-debtor relationships gave rise to the development of conscience, and along with it our sense and concept of obligation. In this paper, I argue that the contractualist reading is both internally inconsistent and textually deficient. It is inconsistent because creditor-debtor relationships are arrangements created by the making of promises, or the act of communicating an intention to undertake an obligation, and therefore require the debtor to already have a concept of obligation. It is textually deficient because Nietzsche argues that the conscience originated as a memory of rules within the morality of custom. I then offer an analysis of the different notions of voluntary and involuntary obligation operative within each domain, arguing that the former depends on the latter.

Confessing to Superfluous Premises

Roy Sorensen, Washington University in St. Louis

Can an argument truthfully confess to having a premise superfluous to its soundness? “An argument has a premise superfluous to its soundness if and only if deleting some premise would leave a sound argument. This argument would be sound if a premise were deleted. This argument has a premise superfluous to its soundness.” The syllogism is valid, but its second premise is false. Suppose we try to make the argument sound by adding an irrelevant premise. Surprisingly, this does not help. Deleting the sacrificial premise would just return us to the original unsound syllogism. Adding further irrelevant premises just delays the result. Paradoxically, we could add any number of irrelevant premises and never make the argument soundly acknowledge the presence of superfluous premises. This verdict is defended against a diagnosis that tries to satisfy the deletion test with an argument that does not involve self-reference.

Legitimacy, Consent, and Initial Appropriation

Jesse Spafford, The Graduate Center, CUNY

In defending the legitimacy of private property, libertarian philosophers tend to endorse unilateralist theories of initial appropriation whereby individuals are able to acquire private property rights over some bit of unowned land or natural resource without the consent of others. At the same time, many libertarians have defended a consent theory of legitimacy whereby a state is taken to be legitimate if and only if its purported subjects have consented to be governed by that state. This paper argues that such consent theories of legitimacy are incompatible with unilateralist theories of initial appropriation, as the property rights that the latter insist can be established without consent amount to a form of legitimacy—and, thus, require consent according to a consent theory of legitimacy. Finally, the paper offers a response to the influential unilateralist argument that consent cannot be a necessary condition of initial appropriation.

“The Familiar Face of a Word”: Wittgenstein and Benjamin on the Experience of Meaning

Alexander Stern, Goethe-Universität Frankfurt

In what is now called The Philosophy of Psychology, Wittgenstein claims that the importance of the concept of aspect-seeing “lies in the connection between the concepts of seeing an aspect and of experiencing the meaning of a word” (PPF §261). Just as we can imagine someone who cannot see different aspects in the same image—for example, the duck-rabbit—we can imagine people who use language but do not experience the meaning of a word. In this paper I explicate the importance of this “meaning-blindness” and its relation to aspect-seeing. I then argue—drawing on a similar thought experiment in Walter Benjamin’s early philosophy of language—that meaning-blindness is in fact a fatal impediment to language-use. The upshot of my analysis is that the aesthetic experience of meaning, regularly marginalized in the philosophy of language, must be understood as fundamental to meaning and language-use.

The Evolutionary Independence of Species in Light of Population Genomics

Beckett Sterner, Arizona State University

Evolutionary conceptions of species place distinctive weight on the idea that each species has its own dynamic independence as a unit of evolution. However, the various notions that a species has its own historical fate, tendency, or role have resisted careful analysis and risk remaining at the level of metaphor. This gap in our theoretical understanding is growing more important as population genomics continues to discover more commonly recognized species that engage in hybridization. How can species be defined in terms of their independent evolutionary identities if their genomes are dynamically coupled through lateral exchange? I argue that the evolutionary independence of species is relative to the slowest timescale at which reproductive cohesion exists among individuals in the species. This approach provides a novel methodological reduction of the macro-evolutionary concept of evolutionary independence to the micro-evolutionary processes constituting population lineages.

Undermining the Principle of Alternative Possibilities

Rick Stoody, Gonzaga University

According to the Principle of Alternative Possibilities (PAP), necessarily, a person is morally responsible for what she has done only if she could have done otherwise. Beginning with Harry Frankfurt over forty years ago, philosophers have offered a number of counterexamples (“Frankfurt examples”) to PAP. The traditional strategy is to attack PAP directly by attempting to provide a modal counterexample to the principle. In this paper, I take a different approach and attack the principle indirectly. As I see it, the fundamental intuition behind PAP is the following: PAP(CC): when an agent is non-derivatively morally responsible for an action, he is so partly in virtue of having been able to do otherwise. I argue that so-called “complete blockage” Frankfurt examples show that PAP(CC) is false, and thus, absent other compelling arguments in favor of PAP, there is little reason to accept it.

Interpersonalism as a Model of Human Agency

Daniel Story, University of California, Santa Barbara

I introduce two competing models of human agency, Individualism and Interpersonalism, and argue that we should prefer the latter. According to Individualism, humans are entirely discrete actors whose practical perspectives are “sealed off” from one another; humans can only deliberate and intend for themselves. According to Interpersonalism, humans can simulate the deliberations of others, form intentions whose content includes others’ actions, and pursue irreducibly shared ends. I argue that Interpersonalism is preferable to Individualism for many reasons. I focus on the fact that Interpersonalism can make sense of how agents without a theory of mind can come to act collectively, an important desideratum because it has been suggested that children develop their theory of mind in this way.

Knowledge Attributions, Practicability, and Norms of Assertion

Gregory Stoutenburg, York College of Pennsylvania

A false knowledge attribution is often conversationally appropriate because it can be used to achieve the speaker’s goal. That a false but practicable claim is sometimes conversationally appropriate lends support to a new norm of assertion. I argue for these two important theses as part of a comprehensive defense of a skeptical view of knowledge attributions. The main obstacle to this view is explaining how false knowledge attributions can be practicable when such claims rely on epistemic facts whose existence is contested by the epistemological skeptic.

On Socrates’ Project of Philosophical Conversion

Jacob Stump, University of Toronto

In the Apology, Socrates describes a project of using philosophical argument to try to lead his interlocutors to value wisdom the most. How, if at all, does Plato think that Socrates can succeed at that? It is widely held that Plato thinks Socrates cannot succeed at it—indeed, that his project of philosophical conversion is misguided from the start, insofar as philosophical argument is ineffective as a tool of value transformation. I oppose this standard view, and in its place I offer a novel account of Socrates’ strategy for changing what his interlocutors value the most. I argue that Plato depicts Socrates going about converting his interlocutors in two stages. In the first, Socrates exploits one of his interlocutor’s preexisting concerns to motivate him to pursue wisdom just for its instrumental value. In the second, this instrumental pursuit causes a transformation after which wisdom is valued more than anything else.

Kant's Cogito Argument as the Principle of Transcendental Philosophy

Meghant Sudan, Colby College

Kant’s Transcendental Deduction relies on the concept of self-consciousness formulated with the expression “I think.” Put thus, its Cartesian heritage is prima facie obvious, but just as hard to understand or defend on closer inspection. This paper discerns a more cautious Cartesian debt in Kant’s argument from the “I think” to the a priori principle of apperception by situating this argument against his efforts to drive a wedge between empirical and rational psychology (and not only a denial of the latter). Kant thereby tries to articulate the formal unity of consciousness, which is nascent in the Cartesian cogito argument, but on nonconceptual grounds of an intuition in general. This lends the intellect a distinctly Kantian stamp of finitude, but also deepens the Cartesian insight by discovering a deeper unity of subjectivity as a system. I compare my interpretation with Dieter Henrich’s and Beatrice Longuenesse’s very powerful readings of this juncture.

An Intellectualist Conception of Human Freedom

Michael Szlachta, University of Toronto

Godfrey of Fontaines, a master at the University of Paris in a period when free will was a topic of much debate, argues that a human being is free because she has the ability to have “perfect cognition,” that is, the ability to grasp the nature of her end, the means to that end, and the relationship between the two. I argue, first, that this conception of freedom is compatible with Godfrey’s “intellectualist” thesis that volition is caused by the activity of our cognitive powers, and second, that there is good reason to hold that freedom and perfect cognition are related. I also present and address two challenges to conceiving of human freedom as Godfrey does. First, it does not seem that having perfect cognition is sufficient for exercising human freedom, and second, having perfect cognition does not even seem to be necessary for exercising human freedom.

Princess Diana’s Dress, Mink Coats, and Nature: Reasons for Valuing as Ends

Levi Tenen, Indiana University Bloomington

A number of philosophers argue that objects can have extrinsic final value, or be valuable as ends for their relation to things outside of them. I agree with these writers but argue that they have not shown how extrinsic features can render an object finally valuable. Christine Korsgaard, Wlodek Rabinowicz, and Toni Ronnow-Rasmussen isolate extrinsic features that seem only to give us reason to value objects for the sake of something else, or non-finally. I suggest a solution: when we value one entity as an end, we might come to conceive of another object as being related to it in a non-instrumental, non-constitutive way—we might conceive of the object as resembling or as being historically related to the thing we already value as an end. In such cases, we might then have good reason to value the object for its own sake.

Save (some of) the Children

Travis Timmerman, Seton Hall University

In “Save the Children!,” Arturs Logins responds to Timmerman’s argument that, in certain cases, it is morally permissible not to prevent something bad from happening, even when one can do so without sacrificing something of comparable moral importance. Logins’s response is thought-provoking, though, as I will argue, his critiques miss their mark. I rebut each of the three responses offered by Logins: two of his criticisms fail, in part, because they rest on an unfortunately common misunderstanding of Singer’s argument in “Famine, Affluence, and Morality,” while the third fails because it appeals to a demonstrably false version of the Intervention Test. My primary focus will be on Logins’s second criticism, since my response to it not only salvages Timmerman’s positive argument but also identifies, and corrects, this misunderstanding of Singer.

Out of Mind, Out of Sight

Emine Tuna, Brown University

In this paper, I have two goals. First, I will criticize some recent positions on imaginative resistance (Shen-yi Liao, Nina Strohminger, and Chandra Sekhar Sripada (2014) and Shen-yi Liao (forthcoming)), which I believe contribute to a trend of straying from the original promise of imaginative resistance research. At the same time, I want to acknowledge some of their strengths, particularly a compelling diagnosis they make (namely, that genre makes a difference). Second, I will provide my own interpretation of the phenomenon and show that it also supplies the theoretical framework needed to account for this compelling diagnosis. I will argue that the reason we find it almost impossible to engage in the imaginative activity prompted by a fictional work is grounded not only in the moral disapprobation the work creates but also in the emotion of disgust that mingles with and amplifies that disapprobation.

Against Duty to Love

Sungwoo Um, Duke University

In “The Idea of a Duty to Love” (2006), S. Matthew Liao argues for the idea of a duty to love, claiming that parents have a duty to love their children. In this paper, I argue that love as he understands it cannot be required as a duty. First, I briefly introduce the conception of love that I will focus on in this discussion. Next, I argue that Liao fails to defend the idea of a duty to love against the commandability objection and the motivation objection. Then, I argue that it is hard to explain why one may not be blameworthy for violating a duty to love. I conclude that the idea of a duty to love is not tenable.

Thin Constitutivism: A Realist Explanation of Moral Motivation

Benjamin Wald, University of Toronto

Metaethical constitutivism offers an account of normativity that purports to preserve much of the objectivity of normative realism while also accounting for the motivational role of normative judgments. I show that metaethical constitutivism actually consists of two distinct elements. The explanation of moral motivation relies on there being a constitutive aim of action. The claim that normative reasons are ontologically based in this aim of action is a further step, and one I will argue brings with it significant problems. The combination of these two claims is what I will call “thick constitutivism.” Defenders of thick constitutivism have overlooked a more modest and, I will argue, more plausible view which I call “thin constitutivism,” which takes the first step of explaining normative motivation in terms of a constitutive aim of action, but denies the further claim that normative reasons are derived from this aim.

Kant and the Science of Empirical Schemata

Jessica Williams, University of South Florida

Kant’s account of empirical schemata raises two central questions: (1) why do empirical concepts need schemata and (2) how do empirical concepts differ from their schemata? The prevailing answer to these questions in the scholarly literature is that concepts are discursive rules while schemata are perceptual rules, which serve to explain how we can apprehend objects in intuition as instances of general kinds. On my view, empirical schemata are not perceptual rules in a psychological sense (as on other views), but in the sense that they are publicly accessible rules for the classification of objects in terms of directly intuitable features like shape and number of parts. They thus play an important role in forming and systematizing empirical concepts within the context of a science of nature.

Cognitive Control and Implicit Attitudes

Jessica Wright, University of Toronto

Some philosophers writing on implicit attitudes claim that implicit attitudes are outside the realm of rational evaluation (“arational”) because they are uncontrollable, and that they are uncontrollable because they are associations. I put pressure on both of these claims. First, I show that associations are uncontrollable only if we assume a dual-systems model, and this kind of model is psychologically indefensible. On the psychologically defensible dual-process model, associations are controllable. Next, I consider the claim that the kind of control (direct as opposed to indirect) determines rational evaluability. I argue that even though we do not exercise direct control over our associations, we are nonetheless rationally evaluable for having them. This has the result that we can call particular associations “rational” or “irrational.” I suggest that what matters for rational evaluability is (i) whether our attitude is controlled (in any sense), and (ii) whether our attitude-forming process is epistemically virtuous.

The Duty of Veracity

Ava Thomas Wright, University of Georgia

In a late essay, “On a Supposed Right to Lie From Philanthropy” (SR), Immanuel Kant asserts a duty of truthfulness in social testimony: “Truthfulness in statements that one cannot avoid is a human being's duty to everyone. . . . [By violating this duty] I bring it about, as far as I can, that statements (declarations) in general are not believed . . . and this is a wrong inflicted upon humanity generally. Thus a lie...makes the source of right unusable.” The aim of this poster session is to determine whether Kant's theory of justice can support the duty of veracity in SR, and if it can, then to determine the nature and scope of this duty. My intent is to extract a Kantian defense of veracity that is social and epistemic. The scope of the duty of veracity will therefore be limited primarily to social testimonial statements of expert knowledge, such as scientific knowledge.

Causes, Cycles, Equilibria

Tomasz Wysocki, University of Pittsburgh

Superman’s laser rays meet Faora’s halfway. If he closes his eyes, the rays from her eyes will hit him, and Superman will be no more. The same holds for her. Thus, Superman’s beaming causes Faora’s beaming, but her beaming causes his as well, and the spectacle goes on forever. The fight of these two constitutes a counterexample to Hitchcock’s account of causation (2001), and my aim is to show how. First, I describe his account; on this account, causal claims can be determined from acyclic graphs describing a situation under consideration. Then, I show how the case above poses problems for the account. Finally, I present a macroeconomic model that is more naturally analyzed with a cyclic graph and propose how to deal with causal claims in situations described by such graphs. Although I focus on Hitchcock’s account, my argument targets any structural equations framework that doesn’t allow cycles.
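
As a minimal illustration (my own sketch, not drawn from the abstract itself), the Superman and Faora case can be written as a pair of structural equations in which each beaming variable appears in the equation for the other, so the induced causal graph unavoidably contains a cycle:

% Illustrative sketch only: binary variables S (Superman beams) and F (Faora beams),
% each determined by the other, as in the mutual-beaming scenario described above.
\begin{align*}
  S &:= F && \text{(Superman beams just in case Faora does)}\\
  F &:= S && \text{(Faora beams just in case Superman does)}
\end{align*}
% The induced graph has edges S -> F and F -> S, i.e., a cycle, so any framework
% that reads causal claims off an acyclic graph cannot represent the case directly.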

Three Problems for Art-Ontological Descriptivism

Michel-Antoine Xhignesse, University of British Columbia

Concept-driven approaches to art’s ontology maintain that the proper method of art-ontological investigation is a description of the content of our thoughts about art. As a result, concept-driven approaches hold that we cannot be substantially mistaken about the content of our concept of “art.” I introduce a trio of problems for this view, all of which centre on the fact that an increasingly large body of anthropological, art historical, and social psychological evidence, together with philosophical argument, points to the conclusion that the way we think about “art” is neither historically stable nor geographically universal: the problems of conceptual instability, conceptual imperialism, and conceptual inclination. Taken together, these problems show that our intuitions reflect cultural and social conventions, not bare ontology.
