An ACL for companion agents that can choose (not) to lie

Abstract: The principles of Agent Communication Languages (ACLs) were rethought in 1998 when Singh [21] criticised mentalist approaches for relying on private mental attitudes and on strong hypotheses incompatible with heterogeneous Multi-Agent Systems (MAS). We believe that now is the time to rethink them again to cope with hybrid MAS, in which artificial agents must communicate with humans. It thus becomes important to differentiate between uttering words as an attempt to reach some goal and actually succeeding in performing a speech act, since such an attempt can fail for various reasons. In this paper we therefore propose a novel approach to the semantics of ACLs that is more faithful to the philosophy of language, distinguishing the locutionary and illocutionary levels of speech acts, as first defined by Austin and later studied by Searle and Vanderveken.
Contributor: Grégory Mounié
Submitted on: Monday, February 20, 2017 - 2:23:46 PM
Last modification on: Thursday, October 24, 2019 - 2:44:11 PM
Long-term archiving on: Sunday, May 21, 2017 - 2:09:57 PM




  • HAL Id: hal-01471961, version 1


Carole Adam, Benoit Gaudou. An ACL for companion agents that can choose (not) to lie. [Research Report] RR-LIG-043, LIG. 2013. ⟨hal-01471961⟩

