Ontology and AI
PHI 637 Ontology and AI Spring 2026
Spring 2026 - PHI637SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence
Asynchronous
Faculty: Barry Smith
This is a remote asynchronous course. Student presentations will be organized in a synchronous session, to be scheduled towards the end of April. Additional synchronous sessions may be organized during the course of the semester. One-on-one sessions with Dr Smith can be arranged on request to phismith@buffalo.edu.
The course program below is divided into numbered weeks. The information for each week begins with links to that week's main video together with the underlying slides. (Transcriptions of the videos will be added in the course of the semester.) There is also additional background material, provided as a starting point for further explorations by each student. You should feel free to ignore it.
To get a grade for this class you need to submit to Dr Smith an essay on a topic of your choice relating to the interactions between ontology and AI, with one or another of the topics documented below as your starting point. On or before March 1 you should send an email to Dr Smith with a one-paragraph outline of your topic. Feel free to contact Dr Smith if you are unsure of what your topic should be.
All enrolled students must email to BS a Starting Draft version of their essay by April 1 at the latest. Further drafts may be needed in response to Dr Smith's editorial comments. Students must submit a final, full version of their essay and of the associated PowerPoint deck by May 1.
Grading
Essay word-length requirements are as follows:
- PhD candidates:
- 2 credit hours: 2500 words / starting draft: 1000 words
- 3 credit hours: 2500 + 3500 words / starting draft: 1000 + 1000 words
- Masters candidates:
- 2 credit hours: 1500 words /starting draft: 750 words
- 3 credit hours: 1500 + 2000 words / starting draft: 750 + 750 words
- Undergraduate candidates
- 2 credit hours: 1000 words / starting draft: 500 words
- 3 credit hours: 1500 words / starting draft: 500 + 500 words
3-credit-hour students may submit one single essay with the corresponding combined word count.
Grading will be assigned according to the following division:
- Essay(s): 40%
- Presentation (and accompanying powerpoint deck): 40%
- Class Participation (responses to presentations): 20%
AI Policy
The starting draft of your essay, to be submitted to BS on or before April 1, should be your own work. This means no use of LLMs. All students are, however, welcome thereafter to use LLMs to polish their starting drafts, provided that they follow these rules:
Option 1: Include a declaration on p. 1 to the effect that the essay was written entirely without any sort of AI assistance. I reserve the right to use software tools, as well as my own judgment, to verify that the draft was written by you. Grades under Option 1 will be determined by the quality of your essay.
Option 2 involves multiple steps:
- Step 1. Create a draft in your own words of an essay that is about half as long as your target length. This should be a substantive draft, but it can contain, for example, rough notes pointing to further lines of development. Not only this initial draft, but also all further steps in the list below, should rely on your study of the relevant literature. Both your draft and your final essay should accordingly contain lists of references.
- Step 2. Submit this draft to me at phismith@buffalo.edu by the middle of the semester.
- Step 3. Create a new prompt using your draft as an attachment, with an instruction such as: show me how I can improve the attached. This will start a potentially long process of improvements to your essay, incorporating further contributions from you together with assistance from the LLM. You should use prompts to steer the LLM output towards a style appropriate to serious academic research, with references, quotations, and definitions as needed. Most importantly: you should be aware that LLMs often make errors (called 'hallucinations'), for example inventing references in the literature which do not in fact exist.
- Step 4. The LLM has been keeping track of everything you tell it to do since you started the new chat. When you think you might be ready to submit, use the LLM's save function to generate a URI linking to all the interactions thus far – effectively a log of your process. This log, together with your initial and final essay, will form part of what is evaluated for your grade.
- Step 5. When you truly are ready to submit, press save one last time and take a note of the link; send me this link, together with your completed essay, and with any notes on features of the log you wish to point out -- for example, requests that I ignore specific chains of prompts because they proved to be dead ends.
Grades under Option 2 will be determined on the basis of (a) the originality of the initial draft, (b) the creativity of your prompts, and (c) the quality of your final essay.
Attendance at the synchronous session featuring student presentations around May 1 is compulsory for all students.
Introduction
Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to support those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as 'intelligent'. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data, where the latter are obtained for example by crawling the internet.
Required reading
- Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022; revised and enlarged 2nd edition published in 2025).
See also offer here
Week 1: The Glory and the Misery of Large Language Models
Part 1: A brief introduction to Large Language Models such as ChatGPT, focusing on both positive and negative aspects of how they work.
Part 2: GPT-5 and the French and Indian War: Teach yourself history with ChatGPT
Questions to ponder
- What does 'stochastic' mean in 'stochastic AI'?
- What is 'scaling'?
- What are hallucinations?
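To help ponder the first of these questions: 'stochastic' means that an LLM does not compute the one correct continuation of a text but samples the next token at random from a learned probability distribution. The toy sketch below (all names and probabilities are invented for illustration; real models work over tens of thousands of tokens) shows how a 'temperature' parameter trades determinism against variety, and how even a low-probability wrong answer -- a hallucination in miniature -- is occasionally sampled:

```python
import random

def sample_next_token(probs, temperature=1.0, rng=None):
    """Sample a next token from a toy probability distribution.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied, more error-prone).
    """
    rng = rng or random.Random()
    # Apply temperature: raise each probability to the power 1/T, then
    # draw proportionally to the rescaled weights.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    r = rng.random() * sum(weights.values())
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Toy distribution over continuations of "The capital of France is ..."
probs = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}

rng = random.Random(0)
# At high temperature even 'Berlin' (a hallucination) is sampled now and then;
# at very low temperature the sampler almost always picks 'Paris'.
hot_samples = [sample_next_token(probs, temperature=2.0, rng=rng) for _ in range(1000)]
cold_samples = [sample_next_token(probs, temperature=0.1, rng=rng) for _ in range(200)]
```

The sketch also hints at why hallucinations are hard to eliminate: the wrong continuation is not a malfunction but an ordinary, low-probability draw from the same distribution as the right one.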
Week 2: Ontology and the History of AI
Part 1: From Good Old Fashioned (Logical, Symbolic) AI to ChatGPT
Since its inception in the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now have real hands-on experience of what AI can do.
In this lecture we will address the origins of AI at Stanford University in the 1970s and '80s, and specifically the work on common-sense ontology of Patrick Hayes and others.
Topics to be dealt with include:
- What is ontology?
- From Aristotle to 20th century philosophical ontology
- Patrick Hayes, Naive Physics and ontology-based robotics
- Doug Lenat and the CYC (for 'enCYClopedia') project
- Why CYC failed
- Why ontology is still important to AI
Background:
- History of AI
- Where do ontologies come from?
- See also references to Hayes in Everything must go
Week 3: Limits of AI?
1. Surveys the technical fundamentals of AI: Methods, mathematics, usage
2. Natural and engineered systems
3. The ontology of systems
4. Complex systems
5. The limits of Turing machines
6. Why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.
Conclusions:
- AI is a family of algorithms to automate repetitive events
- Deep neural networks have nothing to do with neurons
- AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by using gigantically large amounts of data
Background reading:
- Marcus on superintelligence
- https://www.wheresyoured.at/
- https://x.com/jobstlandgrebe?lang=en
- https://ontology.buffalo.edu/smith/
Week 4: Machine Consciousness, Transhumanism, and Ecological Psychology
1. Jobst Landgrebe on mathematical definitions of consciousness
2. Surveys the spectrum of transhumanism
3. Debunks the feasibility of radically improving human beings via technology.
4. Explains why Sam Altman and other AI gods are so passionate about creating Artificial General Intelligence
5. J. J. Gibson, direct realism, and how our behavior is tuned to affordances
Background:
AI and the meaning of life:
- AI and The Matrix
- There is no general AI
- Landgrebe on Transhumanism
- Considering the existential risk of Artificial Superintelligence
- Scott Adams: We are living in a simulation
Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)
Are we living in a simulation?
- David Chalmers' Reality+
- Scott Adams: We are living in a simulation
- AI and The Matrix
- Slides
- Are we living in a simulation?
- On Chalmers on Reality+?
- The Future of Artificial Intelligence
Machine consciousness: Machines cannot have intentionality; they cannot have experiences which are about something.
Background
- Slides / Video: Can a machine be conscious?
- Searle's Chinese Room Argument
- Searle: Minds, Brains, and Programs
- Making AI Meaningful Again
- Søgaard: Do Language Models Have Semantics?
- Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints
Week 5: AGI, Behavior Settings and Distributed Cognition
Part 1. Question-and-answer session with Jérémy Ravenel of naas.ai
Questions to be addressed include:
- What are you doing with BFO and LLMs?
- Can you rely on BFO still being operative in the proper way even after a new release of an LLM?
See also: Why is BFO so powerful?
Part 2. Niches and Intelligence
- Knowing how vs Knowing that
- Personal knowledge and science
- Creativity
- Empathy
- Entrepreneurship
- Leadership and control (and ruling the world)
Background
Week 6: Towards a theory of intelligence
Part 1. Definitions of intelligence
- A. the ability to adapt to new situations (applies both to humans and to animals)
- B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
Can a machine be intelligent in either of these senses?
Can a team be intelligent?
See Ryan Muldoon, "Diversity and the Division of Cognitive Labor", Philosophy Compass 8 (2):117-125 (2013)
Can a team made of humans and AI systems be intelligent?
See M. Stelmaszak et al., "Artificial Intelligence as an Organizing Capability Arising from Human-Algorithm Relations", Journal of Management Studies, https://doi.org/10.1111/joms.70003
Part 2. What do IQ tests measure?
Readings:
- Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
- Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence
The context-dependence of human intelligence, and why AGI is impossible
Part 3. Affordances, tacit knowledge, cognitive niches, and the background of Artificial Intelligence
Background:
- Harry Heft, Ecological Psychology in Context
- There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
- 1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down and say "look, I'm flying!"
- 2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
- 3. If you can't spot irony, you're not intelligent
Week 7: The Free Will Problem and the Problem of the Machine Will
Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics.
- The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.
Implications of the absence of a machine will:
- The problem of the singularity (when machines will take over from humans) will not arise
- The idea of digital immortality will never be realized (Slides)
- There can be no AI ethics (only: ethics governing human beings when they use AI)
What is the basis of ethics as applied to humans?
- Utilitarianism
- Value ethics
- Responsibility
No responsibility without objectifying intelligence
On what basis should we build an AI ethics?
On why AI ethics is (a) impossible, (b) unnecessary
Readings:
- Moor: Four kinds of ethical robots
- Jobst Landgrebe and Barry Smith: No AI Ethics
- Crane: The AI Ethics Hoax
Week 8: The Ontology of Consciousness
- John Searle
- On consciousness: the Chinese Room Argument
- Searle and Smith
- Neuroscience and consciousness
Week 9: Debates on ontology engineering: Part 1
Featuring John Beverley
Debating the following motions:
- Philosophy is irrelevant to ontology engineering
- Mappings merely give extra life to bad ontologies
- AI fear is justified
- BFO is too slow to react
- Knowledge graphs cannot prevent hallucinations
- There can never be AGI
Background
- Strategies for leveraging ontologies and knowledge graphs to enhance the capabilities of Large Language Models and address their limitations.
The Ontological Foundation: A Cornerstone for Trustworthy AI with caveats added in bold face
- Explainability: Ontologies make AI decision-making processes more transparent and interpretable. By providing a clear, logical structure of knowledge, they allow for tracing the reasoning behind some AI decisions.
- Consistency: They help to foster logical consistency across AI systems, reducing errors and contradictions. This is particularly crucial in complex domains where maintaining coherence is challenging.
- Interoperability: Ontologies help to foster seamless integration of knowledge from various sources and domains. This interoperability is essential for creating comprehensive AI systems that can reason across multiple areas of expertise.
- Semantic Richness: Ontologies capture nuanced relationships and constraints that go beyond simple hierarchical structures, allowing for more sophisticated reasoning.
- Domain Expertise Encoding: They provide a means to formally encode human expert knowledge, to some extent bridging the gap between human understanding and machine processing.
An introduction to the statistical foundations of AI (Slides, Video)
The types of AI
- Deterministic AI
- Good old fashioned AI (GOFAI)
- Basic stochastic AI
- How regression works
- Advanced stochastic AI
- Neural networks and deep learning
- Hybrid
- Neurosymbolic AI
- Background reading: Why Machines Will Never Rule the World, 1st edition chapter 8; 2nd edition chapter 9
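The 'how regression works' item under basic stochastic AI can be made concrete with a minimal sketch: ordinary least squares fits a line by minimizing squared error over observed data, and the 'learning' is nothing more than this minimization. The function name and the toy data below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b.

    The slope is the covariance of x and y divided by the variance of x;
    the intercept then makes the line pass through the mean point.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noisy observations of an underlying linear process, roughly y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
```

The recovered slope and intercept land close to the underlying 2 and 1. Deep learning generalizes the same idea: many more parameters, a non-linear model, and iterative rather than closed-form minimization, but still curve-fitting to data.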
Week 10: Debates on ontology engineering: Part 2
- Will combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures?
- The idea of digital immortality is idiotic
- We should allow AI research to proceed unregulated
- Even if you think AGI is impossible, you should treat robots at certain levels of sophistication as moral agents
- 'OWL semantics' have nothing to do with the semantics of ordinary language
- AI will take away our jobs
- There will never be driverless cars
- Science is not ready for software, let alone AI
Outlines the current landscape of ontology-based AI enhancement strategies, highlighting what goes well and what goes poorly, and why ontology engineering is necessary.
Background
April 11: Deadline for submission to BS of starting drafts for your essays
Week 11: On Hallucinations and Political Correctness
Lecture by Jobst Landgrebe on:
- Why machines will never stop hallucinating
In current-day culture, concerns are raised when LLMs respond with symbol or pixel sequences which are seen as deviating from social norms of political correctness or wokeness -- or in other words, when they say the unsayable. Further problems are raised for LLM technology by the inconvenient fact of hallucinations, since this prevents their usage for task automation. LLM architects and engineers try to prevent both types of events. This talk shows why it is impossible to ensure that LLMs do not hallucinate or speak the unspeakable, drawing on arguments from the theory of computation (Turing's undecidability results, Rice's theorem, Gödel's First Incompleteness Theorem).
Literature:
Glukhov et al. 2023, LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?
Banerjee et al. 2024, LLMs Will Always Hallucinate, and We Need to Live With This
Apple, The Illusion of Thinking
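The flavor of the undecidability arguments invoked above can be conveyed by the classic diagonal construction: any purported perfect classifier of program behavior can be defeated by a program that consults the classifier about itself and does the opposite. This is a toy sketch of the self-reference trick behind the Halting Problem and Rice's theorem, not the theorems themselves; all names below are invented:

```python
def make_contrarian(claimed_detector):
    """Given any purported perfect detector of 'bad output' in programs,
    build a program that defeats it by doing the opposite of its verdict.
    """
    def contrarian():
        # Ask the detector about ourselves, then do the opposite.
        if claimed_detector(contrarian):
            return "safe output"        # detector said 'hallucinates' -> behave well
        return "hallucinated output"    # detector said 'safe' -> hallucinate
    return contrarian

# Whatever a detector answers about its contrarian, it is wrong:
def naive_detector(program):
    return False   # certifies every program as safe

def paranoid_detector(program):
    return True    # condemns every program as hallucinating

c1 = make_contrarian(naive_detector)     # certified safe, yet hallucinates
c2 = make_contrarian(paranoid_detector)  # condemned, yet behaves safely
```

Rice's theorem generalizes this: no algorithm can correctly decide any non-trivial semantic property for all programs, which is why a guaranteed hallucination filter for LLM outputs is ruled out in principle.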
Week 12: Landgrebe on the Replication Crisis. Jacko on the Ontological Foundations of Proxemics
Part 1: Jobst Landgrebe: Complex Systems and Cognitive Science: Why the Replication Problem is here to stay
- The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.
Jan Jacko: Ontological Foundations of Proxemics
- Proxemics is the study of spatial behaviour in interpersonal communication. It rests on a set of implicit and explicit assumptions about the nature of space, embodiment, intentionality, and meaning. This presentation aims to articulate these assumptions and outline a conceptual framework for understanding proxemics as an ontologically grounded discipline.
--
Background on the replication crisis:
- Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
- Science has been in a “replication crisis” for a decade
- Irreproducibility Crisis and the Lehman Crash, Barry Smith, Youtube 2020
- Slides
- The replication problems which arise when AI is applied in scientific research
- Is Psychology Finished?
- Bayer tested some findings and only achieved a 21% replication rate for biomedical studies
Week 13: Landgrebe on machine intelligence. Jacko on psychopathic AI
Jobst Landgrebe: Why we cannot create intelligence inside a machine
Timothy W. Coleman: Beyond the Limits of AI: Ontology as a Framework for Good System Design (Student presentation)
Michael Behun III: The Paradox within Artificial Intelligence Development
Jan Jacko: Are intelligent machines psychopathic by design?
- There are two major paradigms in clinical psychology. The first treats mental and personality disorders as disturbances of an inner life: of subjective experience, affect, and self-awareness. This view cannot be meaningfully applied to artificial systems, for which no such subjectivity is given. The second paradigm is behavioural and functional. Here disorders, especially personality disorders, are defined as stable, recurrent patterns of behaviour, cognition, and interpersonal functioning that deviate from expected norms and impair adaptation. Psychopathy in this framework is a cluster of observable traits: persistent violation of social rules, instrumental treatment of others, chronically shallow or incongruent emotional expression, irresponsibility, and a striking absence of anxiety or inhibition in situations that normally elicit it. In this talk I adopt the second, behavioural paradigm and extend it to artificial systems, introducing the notion of AI quasi-personality.
Week 14: Oral presentations (Compulsory for all students)
4:00 John Davis: Symbiotic Surveillance and Artificial Intelligence
4:15 Rachel Mavrovich: The Relevance and Necessity of Phenomenology in Successful Work with Data
4:30 Cristian Keroles: Scientific Realism, Paradigm Shifts, and the Feasibility of AGI
4:45 Mike Behun Jr.: Examining the Role of Formal Ontology and Hybrid AI in Achieving Trustworthy Results, Based on Domain Experts for High Stakes Systems.
5:00 Ore Afe: Ethical Dilemmas in Our Evolving Technological Society
5:15 Gregory DeFranco: Will Algorithms Control Us?
5:30 Claire Allen: Video Games and the Virtual World
5:45 John Hogan: Artificial Unintelligence
Background Material
An Introduction to AI for Philosophers
An Introduction to Philosophy for Computer Scientists
John McCarthy, "What has AI in common with philosophy?"
Companion volume to Why Machines Will Never Rule the World
Podcasts and interviews on Why Machines Will Never Rule the World
Student Learning Outcomes
1. Comprehend the Architecture and Operation of Large Language Models: Explain the basic design and functioning of Large Language Models (LLMs) such as ChatGPT. Define and correctly use key terms.
2. Evaluate the Theoretical and Practical Limits of AI: Explain the limitations of AI systems as applications of Turing-computable mathematics. Critically assess claims about Artificial General Intelligence (AGI) and the “singularity.”
3. Examine Theories of Machine Consciousness, Transhumanism, and Simulation: Explain why machines lack intentionality and subjective experience.
4. Understand Ethical and Normative Dimensions of AI: Explain why AI systems cannot possess will, intention, or moral responsibility, and differentiate between AI ethics and ethics of AI use.
5. Apply Ontology-Based Strategies for AI Enhancement: Explain how ontologies and knowledge graphs can improve the explainability, consistency, and interoperability of AI systems. Identify strengths and weaknesses of ontology-based and neurosymbolic AI approaches.