The central question this volume seeks to answer is: what are the similarities and differences between how human beings know language and how artificial intelligence knows language? The recent development and popularization of artificial intelligence systems called large language models (such as ChatGPT) have led to a proliferation of opinions about the relevance of these systems beyond the practical purposes for which they were designed. It is not uncommon to find statements on social media and in popular magazines, as well as in academic publications, to the effect that these language models have solved the problems that sciences such as linguistics aim to solve, that their success in generating text refutes some particularly influential theories of language, or that language models are themselves scientific theories of language. Such statements rest on the premise that the linguistic knowledge acquired by these systems is comparable to that developed by humans. This book aims to evaluate whether that assumption is warranted. To this end, it gathers the views of renowned linguists and other cognitive scientists on questions such as what kind of knowledge of language these artificial systems have, to what extent they are faithful models of human linguistic knowledge, and what we can learn about the human language faculty by examining their inner workings. Anyone interested in the nature of human language and mind, and in artificial intelligence, can follow the book's eight chapters without being an expert in linguistics or computer science. This is the first comprehensive work to present the views of experts in linguistic theory on these questions and to offer an accessible account of current research on the nature of artificial knowledge of language.