IJCAI Video Track
July 19-21, 2011
The winner of the IJCAI Video Track was: Using Experience to Generate New Regulations (AUTHORS: Javier Morales (Universitat de Barcelona); Maite Lòpez-Sànchez (Universitat de Barcelona); and Marc Esteva (Artificial Intelligence Research Institute (IIIA), Spanish Scientific Research Council (CSIC))).
Recognizing the potential of video presentations to demonstrate and augment AI research results, IJCAI 2011 is reinstating the Video Track that was an integral part of the IJCAI technical programs from IJCAI-89 until IJCAI-97. Video submissions were invited on significant, original, and previously unpublished research on all aspects of artificial intelligence. Submissions designed to educate students and/or engage the public regarding the state of the art in AI were also invited.
A small number of "stand-alone" video submissions were received. Stand-alone submissions were fully reviewed based on i) their technical content (value to AI researchers), ii) their educational/outreach content (value to students and the general public), and iii) their presentation/production quality.
However, most submissions were "accepted paper companion" videos, meaning that they were submitted to accompany papers that had already been accepted through the normal review process. These submissions were lightly reviewed for presentation/production quality.
Overall, 22 videos were submitted, of which 19 were accepted for presentation at IJCAI. Accepted videos will be screened in the coffee-break and lounge areas throughout the conference. In addition, Room 111 will serve as a dedicated screening room with scheduled screening times. Video authors are encouraged to be present to answer questions.
Accepted videos will be linked from the conference webpage and hosted permanently at ijcai.org with a link to the associated abstract and/or paper.
A small number of nominees for the best video award will be screened during the Best Paper track on Tuesday from 11:50 to 12:10. Conference attendees are invited to vote for the award. The winner will be announced during the Closing Event on Friday.
AUTHORS: Alejandro Agostini (Institut de Robòtica i Informàtica Industrial (CSIC-UPC) Barcelona); Carme Torras (Institut de Robòtica i Informàtica Industrial (CSIC-UPC) Barcelona); and Florentin Woergoetter (Bernstein Center for Computational Neuroscience Goettingen).
VIDEO DURATION: 3:29
ABSTRACT: In the video we present a system that integrates AI techniques for planning and learning on two real robot platforms, the humanoid ARMAR III and the Stäubli arm. The system meets a demanding requirement of real robot applications: fast learning of new behaviours without disrupting the ongoing activity. Fast learning is possible because the learning module evaluates and updates in parallel many different explanations of the cause-effects that may result from action executions. The learning method constantly generates and refines planning operators from those explanations that most successfully explain the outcomes of the experienced actions. All the mechanisms of the system are integrated and synchronized in the robots using a general decision-making framework that closes the planning-learning loop online. In both cases the system starts operating with an empty behaviour database. To show the synergies between the integrated components, we use a task based on the Sokoban test application: given a goal specification, consisting of a target object to be moved and its desired destination, the robot should learn to move objects in an ordered way to accomplish the goal. The implementation on the humanoid ARMAR III consists of moving a green cup on a sideboard without colliding with other cups. The movements of the cups are performed through pick-and-place with grasping. The implementation on the Stäubli arm, in turn, consists of moving a target counter, marked in red, to a goal cell in a restricted 3x3 grid world where all the cells are occupied by counters except one. In this case collisions are allowed.
AUTHORS: Pablo Almajano Francoy (Artificial Intelligence Research Institute (IIIA) Spanish Scientific Research Council (CSIC)); Tomas Trescak (Artificial Intelligence Research Institute (IIIA) Spanish Scientific Research Council (CSIC)); Marc Esteva (Artificial Intelligence Research Institute (IIIA) Spanish Scientific Research Council (CSIC)); Inmaculada Rodriguez (WAI Volume Visualization and Artificial Intelligence Research Group Departament de Matemàtica Aplicada i Anàlisi, MAiA Facultat de Matemàtiques, Universitat de Barcelona); and Maite Lopez-Sanchez (WAI Volume Visualization and Artificial Intelligence Research Group Departament de Matemàtica Aplicada i Anàlisi, MAiA Facultat de Matemàtiques, Universitat de Barcelona).
VIDEO DURATION: 04:36
ABSTRACT: The field of Multiagent Systems (MAS) focuses on the design and development of systems composed of autonomous entities which act in order to achieve their common or individual goals. Although humans can be seen as autonomous entities, most MAS methodologies and infrastructures do not consider direct human participation. In general, the human role is limited to acting behind the scenes by customising provided agent templates that participate in the system on humans' behalf. To address this problem we propose using 3D Virtual Worlds, one of the few technologies that provide all the necessary means for direct human inclusion in software systems. 3D Virtual Worlds are 3D graphical environments in which humans participate, represented as graphical embodied characters (avatars), and can operate using simple and intuitive controls. We advocate that 3D Virtual Worlds technology can be successfully used for "opening" multiagent systems to humans. This idea is realized in Virtual Institutions, which combine Electronic Institutions and 3D Virtual Worlds to engineer applications whose participants may be human and software agents. In this demo, we present a Virtual Institution for water-rights negotiation (virtual mWater). We explain its implementation using the Virtual Institution Execution Environment (VIXEE). The main features of the infrastructure are i) the causal connection between Virtual Worlds and Electronic Institutions, ii) the automatic generation and update of the VIs' 3D visualization, and iii) the simultaneous participation of users from different virtual world platforms. We show the result of generating a 3D representation of virtual mWater from its specification and an example of human immersion within the institution.
AUTHORS: Ofra Amir (Ben-Gurion University of the Negev) and Ya'akov (Kobi) Gal (Ben-Gurion University of the Negev).
VIDEO DURATION: 03:48
ABSTRACT: This video presents a plan recognition algorithm for inferring student behavior when using virtual science laboratories. The video demonstrates the motivation for the project as well as the software used in the empirical evaluation. It describes the plan recognition algorithm in general and shows an application for visualizing students' plans to teachers. The video accompanies the paper "Plan Recognition in Virtual Laboratories". Motivation for the research and results of the study: Automatic recognition of students' activities in virtual laboratories can provide important information to teachers as well as serve as the basis for intelligent tutoring. Student use of virtual laboratories presents several challenges: students may repeat activities indefinitely, interleave between activities, and engage in exploratory behavior using trial-and-error. The algorithm was evaluated empirically on data obtained from college students using virtual laboratory software for teaching chemistry. Results show that the algorithm was able to (1) infer the plans used by students to construct their models; (2) recognize key processes such as titration and dilution when they occurred in students' work; (3) identify partial solutions; and (4) isolate sequences of actions that were part of a single error.
AUTHORS: Xiaoping Chen (University of Science and Technology of China); Feng Wang (University of Science and Technology of China); Guoqiang Jin (University of Science and Technology of China); Jiongkun Xie (University of Science and Technology of China); Zhiqiang Sui (University of Science and Technology of China); Xiang Ke (University of Science and Technology of China); Min Cheng (University of Science and Technology of China); and Kai Chen and Jianmin Ji (The Hong Kong University of Science and Technology).
VIDEO DURATION: 07:08
ABSTRACT: A service robot is expected to be able to work in changing environments. Users may change their minds after they have given their unspecific requests to the robot. They may ask the robot to complete a new task for which the robot's knowledge is insufficient. The environment (including the locations of users) may change from time to time. In order to cope with these changes, the robot must be able to understand the users' requests, acquire knowledge, generate plans and act in a timely, proper and effective manner. The demo shows a test of a service robot in a scenario in which all the changing factors mentioned above are integrated. Yet the robot is developed with principled technologies, including situated NLP based on refined Update Semantics and task planning based on hierarchical Answer Set Programming.
AUTHORS: Hong-Jie Dai (Institute of Information Science, Academia Sinica); Chi-Yang Wu (Institute of Information Science, Academia Sinica); Yen-Ching Chang (Institute of Information Science, Academia Sinica); and Wen-Lian Hsu (Institute of Information Science, Academia Sinica).
VIDEO DURATION: 04:41
ABSTRACT: This video demonstrates the features of our tool PubMed-EX, a Firefox add-on we developed that marks up PubMed search results with additional information retrieved from our text-mining services. Normal PubMed search results are compared with results processed by our tool, which contain extra annotations of gene and disease terms. In addition to providing gene and disease term information, semantic relations of certain biomedical verbs that exist in the abstracts are also organized and shown. Extracting this kind of information may be convenient for researchers. For those who are interested in our tool, it is available on the web and free to try.
AUTHORS: Aurélie Favier (INRA); Simon de Givry (INRA); Andrés Legarra (INRA); and Thomas Schiex (INRA).
VIDEO DURATION: 05:37
ABSTRACT: We propose a new additive decomposition of probability tables that preserves equivalence of the joint distribution while reducing the size of potentials, without extra variables. We formulate the Most Probable Explanation (MPE) problem in belief networks as a Weighted Constraint Satisfaction Problem (WCSP). Our pairwise decomposition makes it possible to replace a cost function with smaller-arity functions. The resulting pairwise-decomposed WCSP is then easier to solve using state-of-the-art WCSP techniques. Although testing pairwise decomposition is equivalent to testing pairwise independence in the original belief network, we show how to efficiently test and enforce it, even in the presence of hard constraints. Furthermore, we infer additional information from the resulting non-binary cost functions by projecting and subtracting them onto binary functions. We observed huge improvements from preprocessing with pairwise decomposition and project-and-subtract compared to the current state-of-the-art solvers on two difficult benchmark sets.
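The MPE-to-WCSP reformulation the abstract builds on can be illustrated with a minimal sketch: each probability table becomes an additive cost function via a negative-log transform, so maximizing a product of probabilities is equivalent to minimizing a sum of costs. The toy two-variable network below is hypothetical and purely illustrative, not the authors' decomposition or benchmarks.

```python
import math
from itertools import product

# Toy belief network: P(A) and P(B|A), both Boolean (hypothetical tables).
p_a = {0: 0.3, 1: 0.7}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}

# WCSP view: replace each table by additive costs c = -log p.
cost_a = {a: -math.log(p) for a, p in p_a.items()}
cost_b = {ab: -math.log(p) for ab, p in p_b_given_a.items()}

# MPE: maximize the joint probability over all assignments (a, b).
mpe = max(product([0, 1], repeat=2),
          key=lambda ab: p_a[ab[0]] * p_b_given_a[ab])
# WCSP: minimize the total cost; -log is monotone, so the optima coincide.
wcsp = min(product([0, 1], repeat=2),
           key=lambda ab: cost_a[ab[0]] + cost_b[ab])
assert mpe == wcsp  # both select the assignment (1, 1)
```

On this toy instance both formulations select (A=1, B=1), whose joint probability 0.7 x 0.6 = 0.42 is the maximum.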
AUTHORS: Andrew Finch (NICT); Wei Song (University of Tokyo); Kumiko Tanaka-Ishii (University of Tokyo); and Eiichiro Sumita (NICT).
VIDEO DURATION: 03:17
ABSTRACT: In this video we demonstrate a novel user interface for mobile devices that integrates two popular approaches to language translation for travelers, allowing multimodal communication between the parties involved: the picture book, in which the user simply points to multiple picture icons representing what they want to say, and the statistical machine translation system, which can translate arbitrary word sequences. picoTrans uses a picture sequence as its primary mode of input and generates natural language in both languages from it. The system tightly couples these two translation strategies within a framework that inherits many of the positive features of both approaches while mitigating their main weaknesses. The video shows the advantages of our approach through footage of the user interface in use on an Apple iPad tablet.
AUTHORS: Maria Fox (University of Strathclyde); Derek Long (University of Strathclyde); and Daniele Magazzeni (University G. D'Annunzio, Chieti-Pescara).
VIDEO DURATION: 06:40
ABSTRACT: Efficient use of multiple independent batteries is a practical problem with wide and growing application. The problem of managing known loads can be cast as a deterministic planning problem. Then, the problem of handling stochastic loads can be solved by learning a policy from a large number of deterministic plans. We describe the approach we have adopted to modelling and solving this problem, and building effective policies for battery switching in the face of stochastic load profiles. Our solution exploits and adapts several existing techniques from the planning literature and leads to the construction of policies that significantly outperform those that are currently in use and the best published solutions to the battery management problem. We obtain solutions that achieve more than 99% efficiency compared with the theoretical limit and do so with far fewer battery switches than existing policies. The benefit of our approach is in extended battery lifetimes and massively reduced switching, which allows the use of fewer batteries to service a load and reduced energy lost as heat.
AUTHORS: Marc Hanheide (University of Birmingham); Charles Gretton (University of Birmingham); Richard W Dearden (University of Birmingham); Nick A Hawes (University of Birmingham); Jeremy L Wyatt (University of Birmingham); Andrzej Pronobis (KTH Stockholm); Alper Aydemir (KTH Stockholm); Moritz Göbelbecker (University of Freiburg); and Hendrik Zender (DFKI Saarbrücken).
VIDEO DURATION: 6:26
ABSTRACT: Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. In this video, we present our robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. The first contribution featured in the video is a probabilistic relational model integrating common-sense knowledge about the world in general with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model by automatically switching between decision-theoretic and classical procedures. We demonstrate the system in an object search task in a real-world indoor environment: the robot is equipped with probabilistic common-sense knowledge about the presence of certain types of objects in particular types of rooms (e.g., the probability of finding cornflakes in kitchens) and exploits this knowledge to find objects more efficiently.
AUTHORS: Laura M. Hiatt (Naval Research Laboratory); Anthony M. Harrison (Naval Research Laboratory); and J. Gregory Trafton (Naval Research Laboratory).
VIDEO DURATION: 01:41
ABSTRACT: The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this work, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause for the human's behavior. This allows the cognitive robot to account for variances due to both different knowledge and beliefs about the world, as well as different possible paths the human could take with a given set of knowledge and beliefs. In this video, a robot is presented with a situation where a human's stated goal does not make sense to the robot because the human holds an outdated belief. The robot runs simulations of different possible cognitive models of the human to identify the discrepancy and explain the human's unexpected behavior. With this knowledge in hand, the robot is then able to rectify the human's incorrect belief and be a more effective teammate.
AUTHORS: Jens Kober (MPI Tübingen); Erhan Oztop (ATR, Japan); and Jan Peters (MPI Tübingen).
VIDEO DURATION: 04:06
ABSTRACT: Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, with current techniques the robot in many cases needs to learn a new elementary movement even if a parametrized motor plan exists that covers a related situation. A method is needed that modulates the elementary movement through the meta-parameters of its representation. In this work, we describe how to learn such mappings from circumstances to meta-parameters using reinforcement learning. In particular, we use a kernelized version of reward-weighted regression. We show two applications of the presented setup in robotic domains: the generalization of throwing movements in darts, and of hitting movements in table tennis. We demonstrate that both tasks can be learned successfully using simulated and real robots. The video illustrates the motivation, the setup, the policy updates, and the experiments.
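The reward-weighted regression at the heart of this approach can be sketched, under simplifying assumptions, as a weighted least-squares fit in which each sample's influence is scaled by its reward. The snippet below uses synthetic one-dimensional data and omits the kernelization; it illustrates the principle, not the authors' implementation.

```python
import numpy as np

# Synthetic data (hypothetical): the "good" meta-parameter for state s is 2*s.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=(50, 1))
params = 2.0 * states[:, 0] + rng.normal(0.0, 0.1, 50)   # noisy executed meta-parameters
rewards = np.exp(-(params - 2.0 * states[:, 0]) ** 2)     # higher reward near the optimum

# Reward-weighted regression: weighted least squares with rewards as weights,
# theta = argmin sum_i r_i * (phi(s_i)^T theta - param_i)^2.
X = np.hstack([states, np.ones((50, 1))])  # linear features with a bias term
W = np.diag(rewards)
theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ params)
# theta[0] (slope) should recover ~2.0; theta[1] (bias) should be ~0.0.
```

High-reward executions dominate the fit, so the learned mapping from state to meta-parameter concentrates on what worked well, which is the intuition behind using this update as a policy improvement step.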
AUTHORS: Ranjitha Kumar (Stanford University); Jerry O. Talton (Stanford University); Salman Ahmad (Stanford University); Tim Roughgarden (Stanford University); and Scott R. Klemmer (Stanford University).
VIDEO DURATION: 02:05
ABSTRACT: The Web provides a corpus of design examples unparalleled in human history. However, leveraging existing designs to produce new pages is often difficult. We introduce Bricolage, an algorithm for example-based Web design. Bricolage employs a novel, flexible tree matching technique that learns to create coherent mappings between pages by training on human-generated exemplars. The produced mappings are then used to automatically transfer the content from one page into the style and layout of another. This video shows that Bricolage can learn to accurately reproduce human page mappings, and that it provides a general, efficient, and automatic technique for retargeting content between a variety of real Web pages.
AUTHORS: Javier Morales (Universitat de Barcelona); Maite Lòpez-Sànchez (Universitat de Barcelona); and Marc Esteva (Artificial Intelligence Research Institute (IIIA) Spanish Scientific Research Council (CSIC)).
VIDEO DURATION: 06:00
ABSTRACT: In any society, individuals continuously interact with one another, and conflicts sometimes arise naturally. Regulations have proven useful for enhancing the functioning of societies; for this reason, humans have developed laws that regulate individuals' behavior. MAS societies can also be enhanced by including specific regulations that promote a desired system behavior. However, the key questions are: "When should new regulations be generated?", "How should they be generated?" and "How do we know whether the generated set of norms is correct?". We propose a norm generation method for Multi-Agent Systems that generates new regulations whenever new conflicts arise. Regulations are generated by learning from previous similar experiences, using an unsupervised version of classical Case-Based Reasoning. They are continuously evaluated in terms of their effectiveness and necessity in order to maintain a set of regulations that, if followed, improve the performance of the system. Our proposal is simulated in a traffic intersection scenario developed on Repast Simphony, where the agents are traveling cars. Collisions between cars and traffic jams are the conflictive situations, and the goals of the system are to avoid them. Experiments show that our method succeeds in generating sets of norms that eradicate collisions and traffic jams when goals are non-conflicting. With conflicting goals, our approach searches for a trade-off between system goals.
AUTHORS: Roberto Navigli (Sapienza University of Rome); Paola Velardi (Sapienza University of Rome); and Stefano Faralli (Sapienza University of Rome).
VIDEO DURATION: 05:02
ABSTRACT: The video presents our novel graph-based algorithm for the induction of lexical taxonomies. Unlike many taxonomy learning approaches in the literature, our algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions and hypernyms. This results in a very dense, cyclic and possibly disconnected hypernym graph. The algorithm then induces a taxonomy from the graph.
AUTHORS: Kayur Patel (University of Washington); Ashish Kapoor (Microsoft Research); Steven Drucker (Microsoft Research); James Fogarty (University of Washington); and Desney Tan (Microsoft Research).
VIDEO DURATION: 05:45
ABSTRACT: The video provides an overview of the Prospect system. The first part of the video is focused on the architecture of the system -- on how Prospect automates the process of exploring potential models and how it creates a table of predicted labels for each example. It then explains how we compute example-level statistics that are used later when visualizing and summarizing results from multiple models. The second part of the video focuses on the visualizer. It presents different visualizations of models. It then shows how practitioners can use Prospect to sort and filter both examples and models, and how computed statistics respond to filters. Finally, it provides an example of how practitioners can understand data using a specific visualization -- the incorrectness vs. entropy scatter plot.
16- Changing One's Mind: Erase or Rewind? Possibilistic Belief Revision with Fuzzy Argumentation based on Trust
AUTHORS: Celia da Costa Pereira (Université de Nice Sophia Antipolis); Andrea Tettamanzi (Università di Milano); and Serena Villata (INRIA Sophia Antipolis).
VIDEO DURATION: 05:00
ABSTRACT: In this video, we present a new framework in which the acceptability of information items depends on the trustworthiness of the sources proposing them. In this framework, information items are represented as arguments associated with a plausibility degree representing the trustworthiness of their information source. The main contribution of this work is twofold: we propose (i) a new framework for dealing with arguments that can be only partially acceptable and that are evaluated with respect to their degree of plausibility; and (ii) a new method to assign fuzzy labels to arguments; such a method can be considered a fuzzy extension of the crisp one, which considers a three-valued labeling: in, out, and undecided.
17- Learning where you are going and from whence you came: g- and h-cost learning in real-time heuristic search
AUTHORS: Nathan R. Sturtevant (University of Denver) and Vadim Bulitko (University of Alberta).
VIDEO DURATION: 03:29
ABSTRACT: Real-time agent-centered search is a research paradigm where an agent must find a path to the goal given that each planning step is constant-bounded, and that the agent can only sense in a local area around itself. f-LRTA*, unlike most previous algorithms, learns both heuristic estimates (cost to the goal) and g-cost estimates (cost from the start). This learning allows f-LRTA* to remove states from the state space which can be proven not to be on optimal paths to the goal. This video shows a sample agent solving several problems using both old and new approaches. The comparison illustrates the effectiveness of this new approach.
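The h-cost side of this idea, the heuristic updates of classic LRTA* on which f-LRTA* builds, can be sketched on a one-dimensional corridor. This is an illustrative toy with a zero initial heuristic; the authors' f-LRTA* additionally learns g-costs and prunes provably suboptimal states, which the sketch omits.

```python
def lrta_star(start, goal, n, h, max_steps=100):
    """LRTA*-style agent on a corridor of n cells with unit move costs."""
    s = start
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            return path
        nbrs = [x for x in (s - 1, s + 1) if 0 <= x < n]
        # One-step lookahead: pick the neighbor minimizing c(s, s') + h(s'),
        # then raise h(s) to that value (the LRTA* learning rule), and move.
        best = min(nbrs, key=lambda x: 1 + h[x])
        h[s] = max(h[s], 1 + h[best])
        s = best
        path.append(s)
    return path

h = [0] * 6  # zero heuristic everywhere: the agent learns h as it acts
path = lrta_star(0, 5, 6, h)
assert path == [0, 1, 2, 3, 4, 5]  # updated h-values steer it straight to the goal
```

Each visit raises the heuristic of the current state toward its true cost-to-go, which is what lets real-time agents escape local minima over repeated trials; learning g-costs as well, as f-LRTA* does, enables the state-pruning described in the abstract.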
AUTHORS: Feng Wu (School of Computer Science and Technology, University of Science and Technology of China); Jiongkun Xie (University of Science and Technology of China); and Xiaoping Chen (University of Science and Technology of China).
VIDEO DURATION: 05:26
ABSTRACT: We present a demo of coordination for ad hoc agent teams, in which two robots work together to transfer bottles in a simulated kitchen environment. It demonstrates the performance of the OPAT algorithm with two types of ad hoc teammates, namely a teammate with a fixed policy and a teammate with a random policy. In this demo, a robot running OPAT can learn from past interactions and adapt its behavior to coordinate with different unknown teammates. The results of fully-coordinated and self-interested settings are also given to show that coordination is essential for success in this task.
AUTHORS: Li Zhang (School of Computing, Teesside University).
VIDEO DURATION: 02:02
ABSTRACT: Our previous work developed an intelligent agent to engage in virtual drama improvisation with human users. The intelligent agent was equipped with capabilities for affect detection from users' text input, but the detection did not take any context into consideration. In the work presented here, we focus in particular on context-based affect sensing, modeling speakers' improvisational mood and other participants' emotional influence on the speaking character during the improvisation of loose scenarios. We also provide the intelligent agent with the ability to recognise a few typical metaphorical phenomena. The new developments have enabled the intelligent agent to perform generally better in affect-sensing tasks. The video shows a virtual drama improvisation session contributed by a few human users and the AI agent. The improvisation is mainly about the Crohn's disease scenario: Peter has Crohn's disease and has the option to undergo a life-changing but dangerous surgery; he needs to discuss the pros and cons with friends and family. Janet (Mum) wants Peter to have the operation. Arnold (Dad) is not able to face the situation. Dave (the best friend) mediates the discussion. In the recorded demo, Dave is played by the AI agent, which detects the affect expressed in the users' input, taking context into consideration, and informs the animation engine to produce expressive gestures for user-controlled avatars. The AI agent is also capable of making appropriate responses based on the detected affect to stimulate the improvisation.