Robotic assistants are being designed to help, or work with, humans in a variety of situations, from assistance within domestic settings, through medical care, to industrial applications. Whilst robots have been used in industry for some time, they are often limited in their range of movement or range of tasks. A new generation of robotic assistants has more freedom to move and is able to make decisions autonomously, choosing between alternative courses of action. For people to adopt such robots, they will have to be shown to be both safe and trustworthy. In these lectures we discuss our experiences in applying formal verification to a robotic assistant located in a typical domestic environment. We focus on the Care-O-bot located in the Robot House at the University of Hertfordshire. This work was carried out as part of the EPSRC-funded Trustworthy Robotic Assistants project between the University of Liverpool, the University of Hertfordshire and the Bristol Robotics Lab.
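To give a flavour of what such verification involves, the following is a minimal, hypothetical sketch of the core idea behind model checking a safety property: exhaustively explore every state the system's decision rules can reach and check that the property holds in all of them. The rules, states and property here are invented for illustration and are not the actual Care-O-bot models, which were built for established model-checking tools.

    # Hypothetical sketch: exhaustive exploration of toy decision rules.
    from collections import deque

    # Toy state: (moving, tray_raised). Safety property to verify:
    # the robot never moves while its tray is raised.
    INITIAL = (False, False)

    def successors(state):
        """Enumerate every next state the toy decision rules allow."""
        moving, tray_raised = state
        if not tray_raised:
            yield (True, tray_raised)    # move: only with the tray lowered
        yield (False, tray_raised)       # stop
        if not moving:
            yield (moving, True)         # raise tray: only when stationary
        yield (moving, False)            # lower tray

    def check_invariant(initial, invariant):
        """Breadth-first search over all reachable states."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not invariant(state):
                return state             # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None                      # invariant holds in all reachable states

    bad = check_invariant(INITIAL, lambda s: not (s[0] and s[1]))
    print("property holds" if bad is None else f"counterexample: {bad}")

A real model checker does essentially this over vastly larger state spaces, with temporal-logic properties rather than simple invariants.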
This session focuses on practical techniques for the verification and validation of autonomous systems, from specification, through design, to the code level. Because no single technique is adequate to cover a whole system in practice, a variety of verification techniques will be presented from first principles and brought into the context of robotics, including formal methods, such as model checking and theorem proving, and state-of-the-art simulation-based methods. How these techniques can be applied will be illustrated using real-world examples. We will start with the verification of control system designs with respect to high-level requirements, such as stability, using an assertion-based approach that combines simulation-based verification with automatic theorem proving. We will then focus on using model-based techniques to enhance test generation in the context of coverage-driven verification, applied to code used in robots that interact directly with humans. The session will finish with a proposal to design autonomous systems "for verification"; this requires a paradigm shift towards more systematic design practices.
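As a concrete, if simplified, illustration of the assertion-based approach, the sketch below simulates a toy closed-loop system and checks a stability-style assertion over the resulting trace. The plant, controller gain and tolerance values are all invented for illustration, and only the simulation-based half of the approach is shown; the automatic theorem proving side is left aside here.

    # Hypothetical sketch: simulation-based checking of an assertion.
    def simulate(kp=0.8, setpoint=1.0, dt=0.01, t_end=5.0):
        """Proportional control of a toy integrator plant: dx/dt = u."""
        x, trace = 0.0, []
        for step in range(int(t_end / dt)):
            u = kp * (setpoint - x)      # proportional controller
            x += dt * u                  # plant: integrator, forward Euler
            trace.append((step * dt, x))
        return trace

    def settles(trace, setpoint=1.0, settle_time=4.0, tol=0.05):
        """Assertion: after settle_time the output stays within tol of setpoint."""
        return all(abs(x - setpoint) <= tol for t, x in trace if t >= settle_time)

    assert settles(simulate()), "stability assertion violated"
    print("assertion holds over the simulated trace")

An assertion such as settles holds only for the particular traces simulated; pairing simulation with automatic theorem proving, as in the session, is what lifts such checks beyond individual runs.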
The research presented in this session is based on collaborations within the EPSRC-funded projects "Robust Integrated Verification of Autonomous Systems" and "Trustworthy Robotic Assistants". This session will be supplemented with hands-on demonstrations presented by Dejanira Araiza Illan.
As the use of autonomous robotic systems spreads, the need for their activities to be not only understandable and explainable but also verifiable is increasing. While the idea of autonomy is appealing and powerful, actually developing autonomous systems to be reliable is far from simple. In particular, how can we be sure what such a system will decide to do, and can we really formally guarantee this behaviour? An important aspect, therefore, is being able to verify the truly autonomous decision-making that forms the core of many contemporary systems. In this course, we describe a particular approach to the formal verification of decision-making in agent-based autonomous systems. This will incorporate material on practical autonomous systems, agent programming languages, formal verification, agent model checking, and the practical analysis of autonomous systems. [This material was developed jointly with Louise Dennis.]
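As a minimal, hypothetical sketch of the underlying idea: agent model checking explores every decision the agent's program could take in every situation, and checks a property over all resulting executions. The toy plan library and property below are invented for illustration and do not reflect any particular agent programming language or tool.

    # Hypothetical sketch: checking all decisions of a toy rule-based agent.
    from itertools import product

    PLANS = {  # belief set -> actions the agent may choose between
        frozenset(): ["wait"],
        frozenset({"task"}): ["do_task", "wait"],
        frozenset({"danger"}): ["abort"],
        frozenset({"task", "danger"}): ["abort"],
    }

    def executions(depth):
        """Enumerate all runs of length `depth`, over all perceptions and
        all choices the agent could make at each step."""
        percepts = list(PLANS)
        for beliefs_seq in product(percepts, repeat=depth):
            for choices in product(*[PLANS[b] for b in beliefs_seq]):
                yield list(zip(beliefs_seq, choices))

    # Property: in no execution does the agent act on a task while
    # believing there is danger.
    assert all(not ("danger" in beliefs and act == "do_task")
               for run in executions(3) for beliefs, act in run)
    print("property holds on every execution")

The point is that the check quantifies over every choice the agent could make, which is what distinguishes verifying autonomous decision-making from testing a single run.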
02.00-02.45 Introduction to model checking of engineering systems
03.00-03.45 Verification of control systems on UAS
03.45-04.00 Coffee break
04.00-04.45 Verification of runtime reasoning on robots
05.00-05.45 Pursuit-evasion strategies by model checking
It is clear that many of the application domains currently targeted by the advanced robotics research community are ones in which the barrier between the robot and the rest of the world, which is often highly unstructured, will be torn down. Development of a certification process for such devices will be imperative. It follows that a critical part of this certification process will be a safety-case-based argument, and that this, in turn, will have to be backed up by verification and validation of the correct behaviour of such robots acting in these settings. In this session, we will discuss these issues in some depth, considering not only the technological challenges but also the critical issues for acceptability, as well as the new financial, legal, regulatory and insurance structures that may be required. The discussion will be set in the context of an upcoming revolution in what this presenter would consider to be robotics: the emergence of autonomous vehicles on our roads.
This lecture will be presented in two parts.
Part 1 From robot swarms... Swarm robotics is a new approach to multi-robot systems in which control is completely decentralised and distributed. Inspired by the self-organising collective behaviours of social insects, each robot has a simple set of behaviours, with only local sensing and communication. The overall desired system behaviours emerge from the local interactions between robots, and between robots and their environment. The promise of swarm robotics is that it offers the potential of highly adaptable, scalable and resilient systems, able to tolerate the failure of many individual robots without loss of overall system (i.e. swarm) functionality. The significant engineering challenges, however, are firstly how to design the emergent, self-organising properties of the swarm, and secondly how to assure, and ideally verify, safe and dependable operation. Here I will (1) briefly introduce swarm robotics, illustrating the state of the art in current research both within the Bristol Robotics Lab and elsewhere; (2) address the engineering challenges of mathematical modelling and optimisation, using adaptive swarm foraging as a case study; and (3) analyse the robustness and scalability of a swarm robotic system by developing a model of its reliability, as sketched below.
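To illustrate the reliability-modelling step, the sketch below is a simple, hypothetical k-out-of-N model: each robot is assumed to fail independently, and the swarm is considered functional while at least k of its N robots are still working. The numbers are illustrative, not measurements from any actual swarm.

    # Hypothetical sketch: k-out-of-N reliability of a robot swarm.
    from math import comb

    def swarm_reliability(n, k, p):
        """P(at least k of n robots working), each independently with prob. p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # A 20-robot swarm that can still forage with any 10 robots working is
    # far more reliable than a single robot (illustrative p = 0.9 per robot).
    print(swarm_reliability(20, 10, 0.9))   # very close to 1
    print(swarm_reliability(1, 1, 0.9))     # 0.9

This captures, in the simplest terms, the claimed tolerance to individual failures: overall functionality is lost only once many robots have failed.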
Part 2 ...to ethical robots If robots are to be trusted, especially when interacting with humans, then they will need to be more than just safe. In this talk I will outline the potential of robots capable of modelling, and therefore predicting, the consequences of both their own actions and the actions of other dynamic actors in their environment. I will show that, with the addition of an ‘ethical’ action selection mechanism, a robot can sometimes choose actions that compromise its own safety in order to prevent a second robot from coming to harm. An implementation with both e-puck and NAO mobile robots provides a proof of principle, showing that a robot can, in real time, model and act upon the consequences of both its own and another robot’s actions. I argue that this work moves us towards robots that are ethical, as well as safe. I will conclude by discussing the challenges of verification and validation of an ethical robot.
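As a minimal, hypothetical sketch of this idea (not the actual e-puck or NAO implementation), the robot internally predicts an outcome for each candidate action, both for itself and for the other robot, and selects the action that minimises harm to the other, even at some cost to itself. The actions and outcome scores below are invented for illustration.

    # Hypothetical sketch: ethical action selection via predicted consequences.
    def predict(action):
        """Stand-in for the robot's internal model: returns the predicted
        (harm_to_other, harm_to_self) of an action, each in [0, 1]."""
        outcomes = {
            "proceed_to_goal": (0.9, 0.0),   # the other robot reaches a hazard
            "block_other":     (0.0, 0.4),   # intercept it: risky for self
            "stand_still":     (0.9, 0.0),
        }
        return outcomes[action]

    def select_action(actions):
        # Ethical layer: harm to the other robot dominates harm to self.
        return min(actions, key=lambda a: predict(a))

    print(select_action(["proceed_to_goal", "block_other", "stand_still"]))
    # -> 'block_other': the robot compromises its own safety to prevent harm.

Verifying and validating such a mechanism then means establishing properties of both the internal predictive model and the selection rule.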