
Scientia et Fides

The Epistemological AI Turn: From JTB to Knowledge*

Authors

  • Roman Krzanowski The Pontifical University of John Paul II in Krakow https://orcid.org/0000-0002-8753-0957
  • Izabela Lipińska AI Ethics Independent Researcher https://orcid.org/0000-0002-5745-5773

DOI:

https://doi.org/10.12775/SetF.2026.004

Keywords

LLM systems, Knowledge as JTB, Knowledge in AI systems, LLM and Christian Religion, LLM and religious truth

Abstract

This paper critically examines whether Large Language Models (LLMs) possess knowledge in the sense of the Justified True Belief (JTB) framework. While LLMs excel at tasks such as summarization, translation, and content generation, they lack belief, justification, and truth-evaluation, the key components of JTB. Attributing human-like knowledge to LLMs is therefore a category mistake. To mark this distinction, we introduce knowledge*: a term for the structured linguistic outputs of LLMs, which simulate cognition without understanding. LLMs are not epistemic agents but tools that can augment human thought when used critically and ethically. This "epistemological AI turn" calls for a reevaluation of what counts as knowledge in AI systems. We also consider the theological implications of LLM-generated knowledge. LLMs, lacking conscience or moral sense, risk detaching knowledge from its ethical grounding. In normative traditions such as Christianity, knowledge is inseparable from moral responsibility. If AI-generated religious texts are mistaken for genuine insight, they may foster a form of "algorithmic gnosis": stylized but hollow content that mimics sacred language without meaning. Such use could erode the spiritual depth and moral seriousness of religious expression. As AI takes on authoritative roles, society must guard against confusing knowledge* with true, embodied, ethically accountable knowing.

Author Biography

Roman Krzanowski, The Pontifical University of John Paul II in Krakow

Roman Krzanowski, Ph.D., D.Phil., holds degrees in engineering, philosophy, and information science. He is an assistant professor at UPJPII in Krakow and serves as the secretary of the PAU Commission on the Philosophy of Science. His research focuses on spatial information systems, the philosophy of information, and the philosophy of computing, AI, and ethics. Krzanowski has published extensively on topics such as the philosophy of AI and AI ethics, genetic algorithms, AI NLP systems, phronesis in AI, and the cognitive and ethical gap between AI and humans.



Published

2026-01-23

How to Cite

KRZANOWSKI, Roman and LIPIŃSKA, Izabela. The Epistemological AI Turn: From JTB to Knowledge*. Scientia et Fides. Online. 23 January 2026. [Accessed 25 January 2026]. DOI 10.12775/SetF.2026.004.

Issue

FORTHCOMING

Section

Articles: AI from the Philosophical and Religious Perspectives

License

Copyright (c) 2026 Roman Krzanowski, Izabela Lipińska

Creative Commons License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

CC BY-ND 4.0. The Creator/Contributor is the Licensor, who grants the Licensee a non-exclusive license to use the Work in the fields indicated in the License Agreement.

  • The Licensor grants the Licensee a non-exclusive license to use the Work/related rights item specified in § 1 within the following fields:
    a) recording of the Work/related rights item;
    b) reproduction (multiplication) of the Work/related rights item in print and digital technology (e-book, audiobook);
    c) placing the copies of the multiplied Work/related rights item on the market;
    d) entering the Work/related rights item into computer memory;
    e) distribution of the work in electronic version in open access form under a Creative Commons license (CC BY-ND 3.0) via the digital platform of the Nicolaus Copernicus University Press and the file repository of the Nicolaus Copernicus University.
  • Usage of the recorded Work by the Licensee within the above fields is not restricted by time, numbers or territory.
  • The Licensor grants the license for the Work/related rights item to the Licensee free of charge and for an unspecified period of time.

FULL TEXT License Agreement


ISSN/eISSN

ISSN: 2300-7648

eISSN: 2353-5636
