Content

This page provides brief primers of concepts or notions I am developing or using. The definitions that follow are often taken from papers I wrote. More complete information can be found on the Papers and Research Activities pages.

Ubiquitous computing is a concept that relies on the following predictions:
- Communicating «intelligent» devices that provide us, besides their primary functions, with new information and communication capabilities will soon surround us everywhere.
- Most of them will be objects that we already work or live with (tools, appliances), augmented with sensors, actuators, processors and embedded software.
- Some of them will be mobile, either on their own (robots) or because they are being carried by people (cell phones or PDAs) or driven by them (lawnmowers, cars).
- Most will be context-aware, in order to behave correctly and, possibly, autonomously.
- They will be able to communicate with each other (locally or at a distance) and to interact with their environment and with people in a natural way.
- They will seamlessly integrate into human collectivities, support collaborative work, and presumably change the way we work and interact with our fellow workers.
These devices will be the components of a kind of computing infrastructure that will radically differ from the ones we know today. As a matter of fact, these systems will:
- Not lend themselves to central control, either because it will be impractical or because different people (users, owners, designers, etc.) will control their components,
- See their configuration vary over time, due to the dynamic introduction or removal of components, or because of changes in the way people use and interact with them,
- Be immersed in human collectivities of various sizes and needs (not just one user at a time), and will operate with incomplete information about this social environment,
- Federate highly heterogeneous combinations of software and hardware, which will differ by their function or their processing, communication and action capabilities,
- Be the result of combinations of components that could not have been foreseen at design time but nevertheless produce (possibly) interesting emergent behaviors,
- Continuously need to adapt to their (social, physical and computing) environment in order to improve their efficiency.
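The first two properties above can be made concrete with a small sketch (entirely illustrative; the class and method names are my own invention, not taken from any system described here): components join and leave a decentralized infrastructure at runtime, with no central registry, each one maintaining only a local list of neighbors.

```python
import random

class Component:
    """A hypothetical device in an open, decentralized infrastructure.
    It knows only its current neighbors; there is no central registry."""
    def __init__(self, name):
        self.name = name
        self.neighbors = set()

    def join(self, others):
        # Discover a few nearby components locally (no global view).
        for other in others:
            self.neighbors.add(other)
            other.neighbors.add(self)

    def leave(self):
        # Departure is handled purely by the remaining neighbors.
        for other in list(self.neighbors):
            other.neighbors.discard(self)
        self.neighbors.clear()

random.seed(0)
components = []
for i in range(10):
    c = Component(f"dev{i}")
    c.join(random.sample(components, min(2, len(components))))
    components.append(c)

# A component disappears; no global reconfiguration step is needed,
# since every other component only ever held a local view.
components[3].leave()
assert all(components[3] not in c.neighbors for c in components)
```

The point of the sketch is the absence of any object representing "the system": membership changes are absorbed entirely by local neighbor lists.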
We want these systems, of course, to behave in a predictable and user-friendly way, but we clearly lack, today, the tools or methods that would allow us to design, program and even use them. This is due to their physical and functional distribution, mobility and heterogeneity, which render them far more complex than today's networked computing systems. These issues are not likely to be addressed in a simple way and will require the development of new computing paradigms. Most of them will, however, derive from the concepts, tools, and methods developed in the field of Distributed Artificial Intelligence (DAI). This domain is one of the few to naturally deal with the decentralized mindset required for apprehending these systems. However, this will also require it to rethink many of its underlying models. As a matter of fact, most of the (theoretical or empirical) works on multi-agent systems rely on:
- Small populations of coarse-grained agents, while these new systems will probably exhibit massive sets of fine-grained entities,
- Homogeneous architectures, while incorporating heterogeneity into the design process will be a real necessity,
- Closed and static environments, while these systems will behave in open and highly dynamic ones,
- Deterministic organizational schemes, while there will be a need for more flexibility, including the emergence of collective properties,
- Ethereal agents, while most of them will be embodied and situated.
Designing these new MAS will certainly require us to consider alternative sources of inspiration, beyond economics or sociology. One promising direction (which I call Pervasive Intelligence, PI) is to view and design them as ecosystems of physical agents (heterogeneous, open, dynamic, and massive groups of interacting agents), organized according to biological, physical or chemical principles. Two domains of research illustrate this direction: reactive multi-agent systems and amorphous computing. The first studies complex, self-organized, situated systems that rely on biological metaphors of communication and organization. The second, which draws its inspiration from chemistry, tries to develop engineering principles for obtaining coherent behavior from the cooperation of large numbers of unreliable computing parts connected in irregular and time-varying ways. Pervasive Intelligence would be an ideal combination of these two domains, synthesized in the following question: how to obtain coherent and predictable behaviors from the cooperation of large numbers of situated, heterogeneous agents that interact in unknown, irregular, and time-varying ways? Answering this question would allow us to design truly intelligent, adaptive and autonomous pervasive computing systems.
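The amorphous-computing side of this question can be illustrated with a toy sketch (the scenario and all numbers are my own illustration, not drawn from any particular paper): a population of agents, each holding a local sensor reading, reaches a coherent global estimate through random pairwise exchanges over unreliable, time-varying links, with no central coordinator.

```python
import random

random.seed(42)

# Each agent holds a local sensor reading; the goal is a coherent
# global estimate (the average) using only pairwise local exchanges.
values = [random.uniform(0.0, 100.0) for _ in range(50)]
target = sum(values) / len(values)

for step in range(2000):
    # Links are irregular and time-varying: a random pair interacts,
    # and the exchange may simply fail (unreliable parts).
    i, j = random.sample(range(len(values)), 2)
    if random.random() < 0.3:
        continue  # message lost
    mean = (values[i] + values[j]) / 2.0
    values[i] = values[j] = mean

# All agents end up near the global average without central control.
assert all(abs(v - target) < 1.0 for v in values)
```

This is the classic randomized gossip-averaging scheme: each successful exchange conserves the sum while shrinking the spread, so a coherent global quantity emerges from purely local, failure-prone interactions.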
See also this paper.

The resulting behaviors, however, are still hand-coded by computer scientists.
Autonomous agents can be advantageously used in this process. They may, for example, play a twofold role (see figure): that of computational abstractions for implementing the simulation, and that of assistant-like or profiling agents dedicated to a user or an expert. If they are provided with adequate learning capabilities, they may then autonomously adjust or acquire their behavior through repeated interactions with (or observations of) their user. Such learning procedures are being actively investigated in many subfields of AI under different names: Adjustable Autonomy, Learning by Imitation, Learning by Demonstration, Social Learning, Interactive Situated Learning, etc. One of their main advantages is that the agent learns its behavior in situation and in a progressive way, and that the expert is genuinely involved in the process. The downside is that, so far, they require domain-dependent formalisms for representing actions and contexts, which means that generic architectures will not be available any time soon. However, research on this subject has already been launched in several simulation projects, and I am confident that machine learning, through agent-based simulation, will be part of any modeler's toolkit within a few years.
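A minimal sketch of the learning-by-demonstration idea (the contexts, actions, and class names below are hypothetical illustrations, not from any of the projects mentioned): an agent observes (context, action) pairs produced by the expert and progressively builds a policy by retaining, for each context, the action the expert chose most often.

```python
from collections import Counter, defaultdict

# Expert demonstrations: (context, action) pairs observed while the
# expert drives the simulation. These values are purely illustrative.
demonstrations = [
    ("predator_near", "flee"), ("predator_near", "flee"),
    ("food_visible", "approach"), ("food_visible", "approach"),
    ("food_visible", "flee"),      # a noisy demonstration
    ("nothing", "wander"), ("nothing", "wander"),
]

class ImitatingAgent:
    """Learns a context -> action policy by counting expert choices."""
    def __init__(self):
        self.observed = defaultdict(Counter)

    def observe(self, context, action):
        # Each observation incrementally refines the policy:
        # learning happens in situation, demonstration by demonstration.
        self.observed[context][action] += 1

    def act(self, context):
        # Pick the action the expert chose most often in this context.
        if context in self.observed:
            return self.observed[context].most_common(1)[0][0]
        return "wander"  # default behavior for unseen contexts

agent = ImitatingAgent()
for context, action in demonstrations:
    agent.observe(context, action)

assert agent.act("predator_near") == "flee"
assert agent.act("food_visible") == "approach"
```

The domain-dependence noted above shows up immediately in such a sketch: the whole scheme hinges on how "context" and "action" are encoded, which is exactly what generic architectures would have to abstract away.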