In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on issues related to interpretability, examining evidence from causal intervention methods about the nature of LLMs’ internal representations and computations. We also discuss the implications of multimodal and modular extensions of LLMs, recent debates about whether such systems may meet minimal criteria for consciousness, and concerns about secrecy and reproducibility in LLM research. Finally, we discuss whether LLM-like systems may be relevant to modeling aspects of human cognition, if their architectural characteristics and learning scenario are adequately constrained.
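To make the interpretability discussion above more concrete, here is a minimal, hypothetical sketch of one widely used causal intervention method (activation patching), written with PyTorch forward hooks; the model, layer, and inputs are placeholders rather than anything drawn from the paper itself.

```python
# Minimal sketch of activation patching, assuming a PyTorch model whose target
# layer returns a single tensor; all names here are illustrative placeholders.
import torch
import torch.nn as nn

def run_with_patch(model: nn.Module, layer: nn.Module, clean_input, corrupted_input):
    """Re-run the model on corrupted_input while splicing in the target layer's
    activation from the clean run, to test that activation's causal role."""
    cache = {}

    def save_hook(module, inputs, output):
        cache["clean"] = output.detach()     # record the clean activation

    def patch_hook(module, inputs, output):
        return cache["clean"]                # overwrite with the cached activation

    handle = layer.register_forward_hook(save_hook)
    with torch.no_grad():
        model(clean_input)                   # pass 1: cache the clean activation
    handle.remove()

    handle = layer.register_forward_hook(patch_hook)
    with torch.no_grad():
        patched_output = model(corrupted_input)  # pass 2: patched forward pass
    handle.remove()
    return patched_output
```

Comparing the patched output with an unpatched run on the corrupted input gives a rough measure of how much of the behavior is causally mediated by that layer's activation.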
Large language models like GPT-4 have achieved remarkable proficiency in a broad spectrum of language-based tasks, some of which are traditionally associated with hallmarks of human intelligence. This has prompted ongoing disagreements about the extent to which we can meaningfully ascribe any kind of linguistic or cognitive competence to language models. Such questions have deep philosophical roots, echoing longstanding debates about the status of artificial neural networks as cognitive models. This article -- the first part of two companion papers -- serves both as a primer on language models for philosophers, and as an opinionated survey of their significance in relation to classic debates in the philosophy of cognitive science, artificial intelligence, and linguistics. We cover topics such as compositionality, language acquisition, semantic competence, grounding, world models, and the transmission of cultural knowledge. We argue that the success of language models challenges several long-held assumptions about artificial neural networks. However, we also highlight the need for further empirical investigation to better understand their internal mechanisms. This sets the stage for the companion paper (Part II), which turns to novel empirical methods for probing the inner workings of language models, and new philosophical questions prompted by their latest developments.
2023, Oxford University Press (130,000 words)
This book provides a framework for thinking about foundational philosophical questions surrounding the use of deep artificial neural networks (“deep learning”) to achieve artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning’s current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. The empiricists in this debate were generally faculty psychologists; that is, they argued that the extraction of abstract knowledge from perceptual experience involves the active engagement of general psychological faculties—such as perception, memory, imagination, attention, and empathy. This book explains how recent headline-grabbing deep learning achievements were enabled by adding functionality to these networks that models forms of processing attributed to these faculties by philosophers such as Aristotle, Ibn Sina (Avicenna), John Locke, David Hume, William James, and Sophie de Grouchy. It illustrates the utility of this interdisciplinary connection by showing how it can provide benefits to both philosophy and computer science: computer scientists can continue to mine the history of philosophy for ideas and aspirational targets to hit on the way to building more robustly rational artificial agents, and philosophers can see how some of the historical empiricists’ most ambitious speculations can now be realized in specific computational systems.
2023, Mind Design III, Haugeland, J., Craver, C., and Klein, C. (Eds.) (9000 words)
In this chapter, I explore the question of whether deep neural networks can model the human faculty of abstraction. Current opinion on this question exhibits a stark and puzzling divide. That DNNs are capable of some sort of abstraction—indeed, that such ability is their distinguishing strength—is often treated by DNN researchers as so obvious as to barely require mention. At the same time, skeptics take it as equally obvious that a fatal weakness of DNNs lies in their inability to learn and manipulate human abstractions. To elucidate this dispute, I adopt a strategy that melds the history of philosophy with recent empirical work. Some of the fears that DNN-based processing is alien or opaque can be alleviated by drawing upon the empiricist theories of abstraction and its role in cognition elaborated by, e.g., Locke, Hume, and Berkeley. These accounts illuminate and contextualize otherwise technical details of recent DNN architectures and show how these advances fit into empiricist theories of abstraction. But AI can also inform philosophy: I argue that DNN architectures can help resolve longstanding philosophical puzzles and debates regarding empiricist accounts of abstraction and its role in the acquisition of general category representations.
In this essay, I provide a forward-looking naturalized theory of mental content designed to accommodate predictive processing approaches to the mind, which are growing in popularity in philosophy and cognitive science. The view is introduced by relating it to one of the most popular backward-looking teleosemantic theories of mental content, Fred Dretske’s informational teleosemantics. It is argued that such backward-looking views (which locate the grounds of mental content in the agent’s evolutionary or learning history) face a persistent tension between ascribing determinate contents and allowing for the possibility of misrepresentation. A way to address this tension is proposed by grounding content attributions in the agent’s own ability to detect when it has represented the world incorrectly through the assessment of prediction errors—which in turn allows the organism to more successfully represent those contents in the future. This opens up space for misrepresentation, but that space is constrained by the forward-directed epistemic capacities that the agent uses to evaluate and shape its own representational strategies. The payoff of the theory is illustrated by showing how it can be applied to interpretive disagreements over content ascriptions amongst scientists in comparative psychology and ethology. This theory thus both provides a framework in which to make content attributions to representations posited by an exciting new family of predictive models of cognition, and in so doing addresses persistent tensions with the previous generation of naturalized theories of content.
2020, The British Journal for the Philosophy of Science (10000 words)
The last five years have seen a series of remarkable achievements in deep-neural-network-based Artificial Intelligence (AI) research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are i) data-hungry, ii) brittle, and iii) inscrutable black boxes that merely iv) reward-hack rather than learn real solutions to problems. This paper rebuts these criticisms by exposing comparative bias within them, in the process extracting some more general lessons that may also be useful for future debates.
Deep neural networks are currently the most widespread and successful technology in artificial intelligence. However, these systems exhibit bewildering new vulnerabilities: most notably a susceptibility to adversarial examples. Here, I review recent empirical research on adversarial examples that suggests that deep neural networks may be detecting in them features that are predictively useful, though inscrutable to humans. To understand the implications of this research, we should contend with some older philosophical puzzles about scientific reasoning, helping us to determine whether these features are reliable targets of scientific investigation or just the distinctive processing artefacts of deep neural networks.
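For readers unfamiliar with adversarial examples, the sketch below shows the standard fast gradient sign method (Goodfellow et al. 2015) in PyTorch; it is offered only as background on the phenomenon discussed above, not as code from the paper, and `model`, `x`, and `label` are hypothetical placeholders.

```python
# Illustrative sketch of the fast gradient sign method (FGSM); the model and
# inputs are placeholders, and eps controls the (small) perturbation size.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """Nudge input x in the direction that increases the loss, producing a
    perturbation that is tiny to humans but can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)  # keep pixels in valid range
    return x_adv.detach()
```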
2019, Philosophy Compass (8000 words)
Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation as to why they work so well. This article provides an introduction to these networks as well as an opinionated guidebook on the philosophical significance of their structure and achievements. It argues that deep learning neural networks differ importantly in their structure and mathematical properties from the shallower neural networks that were the subject of so much philosophical reflection in the 1980s and 1990s. The article then explores several different explanations for their success and ends by proposing three areas of inquiry that would benefit from future engagement by philosophers of mind and science.
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call "transformational abstraction". Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to "nuisance variation" in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
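As a concrete illustration of the "combination of linear and non-linear processing" mentioned above, here is a toy PyTorch stack of convolutional blocks; it is a schematic sketch of the generic ingredients, not an architecture discussed in the paper.

```python
# Toy DCNN sketch: each block alternates linear filtering (convolution),
# non-linear rectification (ReLU), and pooling, the combination the abstract
# associates with increasing tolerance to "nuisance variation".
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # linear filtering
        nn.ReLU(),                                         # non-linear rectification
        nn.MaxPool2d(2),  # pooling: tolerance to small spatial shifts
    )

# Stacking blocks re-represents the input in progressively more abstract,
# nuisance-tolerant formats.
toy_dcnn = nn.Sequential(block(3, 16), block(16, 32), block(32, 64))
```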
2018, Oxford Handbook of 4E Cognition (8000 words)
In this article, I begin with basic associative learning capacities to see what must be added to achieve the behavioral flexibility found in intuitive judgment. First, I will sketch a new model of intuitive inferences which describes how they could be practically rational. This model involves a novel hybrid of internalism and externalism, which I argue is the key to understanding the role of psychological explanation in comparative psychology and ethology. A major advantage of the build-up approach is that if we begin with learning mechanisms that we know animals possess, and ask what must be minimally added to achieve the increasingly flexible behavior found in the murky zone, our explanations will more fully recognize the rich dependence of these strategies on ecological scaffolding and developmental shaping, which will in turn increase the explanatory power of those models. Finally, I will suggest some ways to answer the deeper question which faces much of this literature: how domain-specific, ecological processes could ever achieve the domain-general, validity-preserving transitions characteristic of the classical approach. The answer may be disappointing: that the role of such thought is vanishingly small even in adult humans; and while it is extremely powerful and may explain much of our mastery over the natural world, it need only capture the most symbolically and socially scaffolded human activities—explicit scientific and mathematical reasoning—an ideal our cognition rarely actually achieves.
(with James Garson)
2018, Routledge Handbook of the Computational Mind (6000 words)
This is a primarily historical entry that traces the development of connectionism from early insights into neurons and perceptrons, to the height of its popularity in the PDP program, to the current resurgence of neural network research in deep learning. In each case, discoveries are placed in philosophical context.
A surge of empirical research demonstrating flexible cognition in animals and young infants has raised interest in the possibility of rational decision-making in the absence of language. A venerable position, which I here call "Classical Inferentialism", holds that nonlinguistic agents are incapable of rational inferences. Against this position, I defend a model of nonlinguistic inferences that shows how they could be practically rational. This model vindicates the Lockean idea that we can intuitively grasp rational connections between thoughts by developing the Davidsonian idea that practical inferences are at bottom categorization judgments. From this perspective, we can see how similarity-based categorization processes widely studied in human and animal psychology might count as practically rational. The solution involves a novel hybrid of internalism and externalism: intuitive inferences are psychologically rational (in the explanatory sense) given the intensional sensitivity of the similarity assessment to the internal structure of the agent's reasons for acting, but epistemically rational (in the justificatory sense) given an ecological fit between the features matched by that assessment and the structure of the agent's environment. The essay concludes by exploring empirical results that show how nonlinguistic agents can be sensitive to these similarity assessments in a way that grants them control over their opaque judgments.
The Darwinian protolanguage hypothesis is one of the most popular theories of the evolution of human language. According to this hypothesis, language evolved through a three-stage process involving general increases in intelligence, the emergence of grammatical structure as a result of sexual selection on protomusical songs, and finally the attachment of meaning to the components of those songs. The strongest evidence for the second stage of this process has been considered to be birdsong, and as a result researchers have investigated the existence of various forms of grammar in the production and comprehension of songs by birds. Here, we argue that mating dances are another relevant source of sexually-selected complexity that has until now been largely overlooked by proponents of Darwinian protolanguage, focusing especially on the dances of long-tailed manakins. We end by sketching several lines of research that should be pursued to determine the relevance of mating dances to the evolution of language.
2017, The Routledge Handbook of Philosophy of Animal Minds (5000 words)
The Standard Practice of comparative psychology presumes that cognitive and associative causes of behavior are mutually exclusive alternatives, and attempts to distinguish them by means of cleverly controlled experiments. I here provide reasons for thinking that this methodology is due for a serious revision, but not the wholesale rejection recommended by many recent commentators. If we reinterpret the methodology as trying to determine the memory system under which a behavior is controlled—accepting that all memory systems, even the distinctively "cognitive" ones, can fruitfully be described by associative models—then this methodology can be largely salvaged, and indeed emerge with a strengthened self-understanding. This revision requires numerous changes of perspective and especially a willingness to cooperate with neuroscience; but if we are up to the task, comparative psychology may continue to enjoy a bright future for many years to come.
Recent studies purported to demonstrate that chimpanzees, monkeys and corvids possess a basic Theory of Mind, the ability to attribute mental states like seeing to others. However, these studies remain controversial because they share a common confound: the conspecific’s line of gaze, which could serve as an associative cue. Here, we show that ravens Corvus corax take into account the visual access of others, even when they cannot see a conspecific. Specifically, we find that ravens guard their caches against discovery in response to the sounds of conspecifics when a peephole is open but not when it is closed. Our results suggest that ravens can generalize from their own perceptual experience to infer the possibility of being seen. These findings confirm and unite previous work, providing strong evidence that ravens are more than mere behaviour-readers.
2016, British Journal for the Philosophy of Science, 67(4), 1091-1115 (10,300 words)
I here critique the application of the traditional, similarity-based account of natural kinds to debates in psychology. A challenge to such accounts of kindhood—familiar from the study of biological species—is a metaphysical phenomenon that I call ‘transitional gradation’: the systematic progression of slightly modified transitional forms between related candidate kinds. Where such gradation proliferates, it renders the selection of similarity criteria for kinds arbitrary. Reflection on general features of learning—especially on the gradual revision of concepts throughout the acquisition of expertise—shows that even the strongest candidates for similarity-based kinds in psychology exhibit systematic transitional gradation. As a result, philosophers of psychology should abandon discussion of kindhood, or explore non-similarity based accounts.
2015, Synthese 192(12) 3915-3942 (13,572 words)
The functionalist approach to kinds has suffered recently due to its association with law-based approaches to induction and explanation. Philosophers of science increasingly view nomological approaches as inappropriate for the special sciences like psychology and biology, which has led to a surge of interest in approaches to natural kinds that are more obviously compatible with mechanistic and model-based methods, especially homeostatic property cluster theory. But can the functionalist approach to kinds be weaned off its dependency on laws? Dan Weiskopf has recently offered a reboot of the functionalist program by replacing its nomological commitments with a model-based approach more closely derived from practice in psychology. Roughly, Weiskopf holds that the natural kinds of psychology will be the functional properties that feature in many empirically successful cognitive models, and that those properties need not be localized to parts of an underlying mechanism. I here skeptically examine the three modeling practices that Weiskopf thinks introduce such non-localizable properties: fictionalization, reification, and functional abstraction. In each case, I argue that recognizing functional properties introduced by these practices as autonomous kinds comes at clear cost to those explanations’ counterfactual explanatory power. At each step, a tempting functionalist response is parochialism: to hold that the false or omitted counterfactuals fall outside the modeler’s explanatory aims, and so should not be counted against functional kinds. I conclude by noting the dangers this attitude poses to scientific disagreement, inviting functionalists to better articulate how the individuation conditions for functional kinds might outstrip the perspective of a single modeler.
2015, Philosophical Psychology 28(3) 307-336 (13,000 WORDS)
Our prominent definitions of cognition are too vague and lack empirical grounding. They have not kept up with recent developments, and cannot bear the weight placed on them across many different debates. I here articulate and defend a more adequate theory. On this theory, behaviors under the control of cognition tend to display a cluster of characteristic properties, a cluster which tends to be absent from behaviors produced by non-cognitive processes. This cluster is reverse-engineered from the empirical tests that comparative psychologists use to determine whether a behavior was generated by a cognitive or a non-cognitive process. Cognition should be understood as the natural kind of psychological process that non-accidentally exhibits the properties assessed by these tests (as well as others we have not yet discovered). Finally, I review two plausible neural accounts of cognition’s underlying mechanisms—one based in localization of function to particular brain regions and another based in the more recent distributed networks approach to neuroscience—which would explain why these properties non-accidentally cluster. While this notion of cognition may be useful for a number of debates, I here focus on its application to a recent crisis over the distinction between cognition and association in comparative psychology.
2014, Mind & Language 29(5) 566-589 (10,000 WORDS)
Philosophers have worried that research on animal mind-reading faces a “logical problem”: the difficulty of experimentally determining whether animals represent mental states (e.g., seeing) or merely the observable evidence for those states (e.g., line-of-gaze). The most impressive attempt to confront this problem has been mounted recently by Robert Lurz (2009, 2011). However, Lurz's approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not appear to agree on criteria for representation. As such, future debate on this question should either abandon the representational idiom or confront differences in underlying semantics.
For the most part, the Aesthetic Theory of Art—any theory of art claiming that the aesthetic is a descriptively necessary feature of art—has been repudiated, especially in light of what are now considered traditional counterexamples. We argue that the Aesthetic Theory of Art can effectively vitiate such counterexamples by abandoning aesthetic-feature possession by the artwork in favor of aesthetic-concept possession by the artist. This move productively re-frames and re-energizes the debate surrounding the relationship between art and the aesthetic. That is, we claim that Aesthetic Theory, so re-framed, suggests that the aesthetic might have a central and substantial explanatory role to play within traditional philosophical inquiries as well as recent and more empirical inquiries into the psychological and cognitive aspects of art and its practice. Finally, we discuss the directions this new work might take—by tying art theory to investigations of the distinctive sensorimotor capacities of artists, their specialized aesthetic conceptual schemata, and the ways these distinctive capacities and schemata contribute to the production of artworks.
2013, Biology & Philosophy 28(5) 853-871 (10,000 WORDS)
How should we determine the distribution of psychological traits—such as Theory of Mind, episodic memory, and metacognition—throughout the Animal kingdom? Researchers have long worried about the distorting effects of anthropomorphic bias on this comparative project. A purported corrective against this bias was offered as a cornerstone of comparative psychology by C. Lloyd Morgan in his famous “Canon”. Also dangerous, however, is a distinct bias that loads the deck against animal mentality: our tendency to tie the competence criteria for cognitive capacities to an exaggerated sense of typical human performance. I dub this error “anthropofabulation”, since it combines anthropocentrism with confabulation about our own prowess. Anthropofabulation has long distorted the debate about animal minds, but it is a bias that has been little discussed and against which the Canon provides no protection. Luckily, there is a venerable corrective against anthropofabulation: a principle offered long ago by David Hume, which I call “Hume’s Dictum”. In this paper, I argue that Hume’s Dictum deserves a privileged place next to Morgan’s Canon in the methodology of comparative psychology, illustrating my point through a discussion of the debate over Theory of Mind in nonhuman animals.
(Cartoon postscript.)
2013, Biology & Philosophy 28(1) 145-152 (4,000 WORDS)
In World without Weight, Povinelli and colleagues ask whether chimpanzees can understand the concept of weight, answering with a resounding “no”. They justify their answer by appeal to over thirty previously unpublished experiments. I here evaluate in detail Povinelli’s arguments against his targets, questioning the assumption that such comparative questions will be resolved with an unequivocal “yes” or “no”.
2013, Communications in Computer and Information Science 272 258-275 (9,000 WORDS)
Ontology evaluation poses a number of difficult challenges requiring different evaluation methodologies, particularly for a "dynamic ontology" generated by a combination of automatic and semi-automatic methods. We review evaluation methods that focus solely on syntactic (formal) correctness, on the preservation of semantic structure, or on pragmatic utility. We propose two novel methods for dynamic ontology evaluation and describe the use of these methods for evaluating the different taxonomic representations that are generated at different times or with different amounts of expert feedback. These methods are then applied to the Indiana Philosophy Ontology (InPhO), and used to guide the ontology enrichment process.
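By way of illustration only (these are not the paper's two novel methods), one simple way to compare taxonomic representations generated at different times or with different amounts of feedback is to score the overlap of their is-a edges:

```python
# Hypothetical sketch: precision/recall over is-a edges as a crude way to
# compare two versions of a taxonomy; not the evaluation measures of the paper.
def isa_edge_overlap(taxonomy_a, taxonomy_b):
    """Each taxonomy is a set of (child, parent) is-a edges."""
    a, b = set(taxonomy_a), set(taxonomy_b)
    shared = a & b
    precision = len(shared) / len(a) if a else 0.0
    recall = len(shared) / len(b) if b else 0.0
    return precision, recall

old = {("epistemology", "philosophy"), ("reliabilism", "epistemology")}
new = {("epistemology", "philosophy"), ("virtue epistemology", "epistemology")}
print(isa_edge_overlap(new, old))  # (0.5, 0.5)
```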
2013, Evolution 67(3) 918-919 (1,000 WORDS)
A review of Daniel Povinelli's World without Weight.
2012, Essays in Philosophy 13(2) (4,000 WORDS)
Jennifer McMahon argues that we understand art not by explicitly interpreting “raw percepts,” but rather by engaging with our implicit tendencies to interpret complex stimuli in terms of culturally-engrained preconceptions and narratives. These attributions of order require a shared conceptual and cultural background, and thus one might worry that in denying access to raw percepts, the view dulls art’s critical edge. Against this worry, McMahon argues that art can continue to create and innovate by inviting us to critically reflect upon the very preconceptions on which our engagement with it necessarily depends. In this response, I place these attributions of order in historical and empirical context. In addition, I discuss a lingering, related mystery — the possibility of the occasionally punctuated character of artistic evolution, in which prevailing aesthetic conventions are replaced with almost entirely new ones. I suggest that such radical breaks with the past are possible even given the concept-ladenness of perception, but are only likely to succeed when they tap into a culturally-invariant bedrock of more basic attributions of order.
2012, Philosophical Psychology 25(3) 457-461 (2,000 WORDS)
In The Ego Tunnel, Thomas Metzinger offers us an original and informed overview of the science and philosophy of consciousness. In contrast to his earlier books, Metzinger's discussion is aimed not at professional philosophers or scientists, but rather at the wider public. The book's most distinctive contribution is Metzinger's visionary analysis of the future of consciousness research. Updating themes commonly associated with the Churchlands, Metzinger warns that this future research will grant us new capabilities to understand and manipulate our own conscious states, and we had better get ready.
2011, International Journal for Comparative Psychology 24(1) 1-35 (11,000 WORDS)
The standard methodology of comparative psychology has long relied upon a distinction between cognition and ‘mere association’; cognitive explanations of nonhuman animals' behaviors are only regarded as legitimate if associative explanations for these behaviors have been painstakingly ruled out. Over the last ten years, however, a crisis has broken out over the distinction, with researchers increasingly unsure how to apply it in practice. In particular, a recent generation of psychological models appears to satisfy existing criteria for both cognition and association. Salvaging the standard methodology of comparative psychology will thus require significant conceptual redeployment. In this article, I trace the historical development of the distinction in comparative psychology, distinguishing two styles of approach. The first style tries to make out the distinction in terms of the properties of psychological models, for example by focusing on criteria like the presence of rules & propositions vs. links & nodes. The second style of approach attempts to operationalize the distinction by use of specific experimental tests for cognition performed on actual animals. I argue that neither style of approach is self-sufficient, and that both must cooperate in an iterative empirical investigation into the nature of animal minds if the distinction is to be reformed.
The application of digital humanities techniques to philosophy is changing the way scholars approach the discipline. This paper seeks to open a discussion about the difficulties, methods, opportunities, and dangers of creating and utilizing a formal representation of the discipline of philosophy. We review our current project, the Indiana Philosophy Ontology (InPhO) project, which uses a combination of automated methods and expert feedback to create a dynamic computational ontology for the discipline of philosophy. We argue that our distributed, expert-based approach to modeling the discipline carries substantial practical and philosophical benefits over alternatives. We also discuss challenges facing our project (and any other similar project) as well as the future directions for digital philosophy afforded by formal modeling.
(with Joshua Alexander, Chad Gonnerman, and Jonathan Weinberg)
2010, Philosophical Psychology 23(3) 331-355 (11,000 WORDS)
Recent experimental philosophy arguments have raised trouble for philosophers' reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there's no reason to think philosophers will make the same mistakes. But this deploys a substantive empirical claim, that philosophers' training indeed inculcates sufficient protection from such mistakes. We canvass the psychological literature on expertise, which indicates that people are not generally very good at reckoning who will develop expertise under what circumstances. We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.
(with Kai Eckart, Mathias Niepert, Christof Niemann, Colin Allen, and Heiner Stuckenschmidt)
2010, Proceedings of the 10th ACM/IEEE JCDL, Gold Coast, Australia, ACM Press (9,000 WORDS)
The "wisdom of crowds" is accomplishing tasks that are cumbersome for individuals yet cannot be fully automated by means of specialized computer algorithms. One such task is the construction of thesauri and other types of concept hierarchies. Human expert feedback on the relatedness and relative generality of terms, however, can be aggregated to dynamically construct evolving concept hierarchies. The InPhO (Indiana Philosophy Ontology) project bootstraps feedback from volunteer users unskilled in ontology design into a precise representation of a specific domain. The approach combines statistical text processing methods with expert feedback and logic programming to create a dynamic semantic representation of the discipline of philosophy. In this paper, we show that results of comparable quality can be achieved by leveraging the workforce of crowdsourcing services such as the Amazon Mechanical Turk (AMT). In an extensive empirical study, we compare the feedback obtained from AMT's workers with that from the InPhO volunteer users providing an insight into qualitative differences of the two groups. Furthermore, we present a set of strategies for assessing the quality of different users when gold standards are missing. We finally use these methods to construct a concept hierarchy based on the feedback acquired from AMT workers.
(with Adam Shriver, Stephen Crowley, and Colin Allen)
2009, Behavioral and Brain Sciences 32(2) 140-141 (1,000 WORDS)
Peter Carruthers argues that an integrated faculty of metarepresentation evolved for mindreading and was later exapted for metacognition. A more consistent application of his approach would regard metarepresentation in mindreading with the same skeptical rigor, concluding that the “faculty” may have been entirely exapted. Given this result, the usefulness of Carruthers’ line-drawing exercise is called into question.
(with Mathias Niepert and Colin Allen)
Proceedings of the Workshop on Web 3.0:(SW)^2 at ACM Hypertext, Turin, Italy (3500 WORDS)
The Indiana Philosophy Ontology (InPhO) project is presented as one of the first social-semantic web endeavors which aims to bootstrap feedback from users unskilled in ontology design into a precise representation of a specific domain. Our approach combines statistical text processing methods with expert feedback and logic programming approaches to create a dynamic semantic representation of the discipline of philosophy. We describe the basic principles and initial experimental results of our system.
InPhO is a system that combines statistical text processing, information extraction, human expert feedback, and logic programming to populate and extend a dynamic ontology for the field of philosophy. Integrated into the editorial workflow of the Stanford Encyclopedia of Philosophy (SEP), it will provide important metadata features such as automated generation of cross-references, semantic search, and ontology-driven conceptual navigation.
(with Mathias Niepert and Colin Allen)
2008 Selected papers from the 9th Annual WebWise Conference. First Monday, 13(8) (4200 WORDS)
The Indiana Philosophy Ontology (InPhO) is a "dynamic ontology" for the domain of philosophy derived from human input and software analysis. The structured nature of the ontology supports machine reasoning about philosophers and their ideas. It is dynamic because it tracks changes in the content of the online Stanford Encyclopedia of Philosophy. This paper discusses ways of managing the varying expertise of people who supply input to the InPhO and provide feedback on the automated methods.
The next generation of online reference works will require structured representations of their contents in order to support scholarly functions such as semantic search, automated generation of cross-references, tables of contents, and ontology-driven conceptual navigation. Many of these works can be expected to contain massive amounts of data and be updated dynamically, which limits the feasibility of relying on "manually" coded ontologies to keep up with changes in content. However, relationships relevant to inferring an ontology can be recovered from statistical text processing, and these estimates can be verified with carefully-solicited expert feedback. In this paper, we explain a method by which we have used answer set programming on such expert feedback to dynamically populate and partially infer an ontology for a well-established, open-access reference work, the Stanford Encyclopedia of Philosophy.
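The inference step described above is done with answer set programming in the paper itself; purely as a toy illustration of the underlying idea, the Python sketch below closes pairwise expert judgments ("A is more specific than B") under transitivity and flags directly conflicting feedback.

```python
# Toy sketch (the actual system uses answer set programming): derive implied
# hierarchy links from pairwise expert feedback and flag conflicting pairs.
def close_feedback(more_specific_than):
    """more_specific_than: set of (narrower, broader) pairs from expert feedback."""
    closure = set(more_specific_than)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    # A pair appearing in both directions signals conflicting feedback to resolve.
    conflicts = {(a, b) for (a, b) in closure if (b, a) in closure}
    return closure, conflicts

feedback = {("reliabilism", "epistemology"), ("epistemology", "philosophy")}
print(close_feedback(feedback))
```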
This paper describes the design of new algorithms and the adjustment of existing algorithms to support the automated and semi-automated management of domain-rich metadata for an established digital humanities project, the Stanford Encyclopedia of Philosophy. Our approach starts with a "hand-built" formal ontology that is modified and extended by a combination of automated and semi-automated methods, thus becoming a "dynamic ontology". We assess the suitability of current information retrieval and information extraction methods for the task of automatically maintaining the ontology. We describe a novel measure of term-relatedness that appears to be particularly helpful for predicting hierarchical relationships in the ontology. We believe that our project makes a further contribution to information science by being the first to harness the collaboration inherent in an expert-maintained dynamic reference work to the task of maintaining and extending a formal ontology. We place special emphasis on the task of bringing domain expertise to bear on all phases of the development and deployment of the system, from the initial design of the software and ontology to its dynamic use in a fully operational digital reference work.
(with Mathias Niepert and Colin Allen)
Brief article in the APA Newsletter on Philosophy and Computers introducing the InPhO (1400 WORDS)
The goals of the Indiana Philosophy Ontology (InPhO) project are to build and maintain a "dynamic ontology" for the discipline of philosophy, and to deploy this ontology in a variety of digital philosophy applications. Automated information-retrieval methods are combined with human feedback to build and manage a machine-readable representation (i.e., a "formal ontology") of the relations among philosophical ideas and thinkers. The applications we hope to develop that will employ the ontology include automatic generation of cross-references for Stanford Encyclopedia of Philosophy (SEP) articles, semantic search of the SEP and other philosophical resources (including guided searching with Noesis), conceptual navigation through the SEP using information visualization techniques, and web access to the biographical and citational information contained in the InPhO. Moreover, we will archive the dynamically generated versions of the ontology.