Abstract
This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, such as ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing that they correspond to a rather well-behaved non-monotonic logic. We also show that adding a preference relation on inductive theories allows us to characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation) to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments exchanged in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e., they are precisely the same as the inductive theories built by a single agent with all the data.
| Original language | English |
|---|---|
| Pages (from-to) | 129-148 |
| Number of pages | 20 |
| Journal | Artificial Intelligence |
| Volume | 193 |
| Publication status | Published - Dec 2012 |
Bibliographical note

Copyright 2012 Elsevier B.V., All rights reserved.

ASJC Scopus subject areas
- Linguistics and Language
- Language and Linguistics
- Artificial Intelligence