Autonomous systems: From labs to lives

14 December 2021

In conversation with Prof. Michael Winikoff


 

Prof. Michael Winikoff is a professor at the School of Information Management at Victoria University of Wellington. Michael’s research has focussed on software that is conceptualised in terms of "intelligent agents" which are able to exhibit robust and flexible behaviour. He has worked since 1999 on developing approaches for engineering these sorts of systems. More recently he has been looking at explainable AI, societal consequences of autonomous systems, and the issues that affect trust in these systems. He is also an Associate Editor for the Journal of Autonomous Agents and Multi-Agent Systems, Editor-in-Chief for the International Journal of Agent-Oriented Software Engineering, and he has been both programme and general chair of the International Conference on Autonomous Agents and MultiAgent Systems.


 

Prof. Michael Winikoff is an expert in Agent-Oriented Software Engineering. In an interview with DIGITALE WELT Magazin he gives us a glimpse into his prolific research. He explains the challenges in the field as well as the responsibilities towards consumers with regard to autonomous systems.

You are not just a scientist, but also a pianist and composer. Did music teach you something about software?

Hmm. That’s a good question. I can see a range of things that music and software have in common, but I don’t think I’ve learned anything specific about software from music. More broadly, music, software, and mathematics all share some common ideas about having rules and structures, and all are concerned with abstract (non-physical) creations. Creating a new piece of music, writing software, or proving a theorem all involve working within a formal framework to create something new, and with all three the concept of elegance is relevant. Of course, there are also differences: the goal of music is to communicate, in particular emotions, whereas the goal of software is to perform a specified function.

You have researched the factors that enable humans to trust autonomous systems. Please describe them briefly.

Well, the first thing I should say is that the list of factors is not meant to be definitive, always applicable, or complete. Rather, these are some factors that are important, but there are so many different domains in which autonomous systems can be used that no list is going to be universal. Some domains might have additional factors, and for some domains not all common factors would be relevant.

In any case, the factors that I discuss in my work are: recourse, explanation, verification & validation, and incorporating human values.

Recourse is the idea that there has to be a way to deal with the consequences of an autonomous system doing the wrong thing. If, say, a self-driving car crashes into your fence, how do you get compensated for this? This is obviously primarily a legal and social question, but it is crucially important, because no technology is perfectly able to function in all situations.

For a wide range of domains it can be important that autonomous systems have the ability to explain the reasoning that led to a particular action, or course of action, being performed. This is important because sometimes an autonomous system will behave in a way that might be correct, but not obviously correct. A simple example is when a GPS takes you on an unusual (and longer) path. If you don’t realise that there has been a traffic accident on the usual route, you might not understand why the GPS is doing something unusual, and your trust in it might decrease. Of course, explanations need to be given in a form that is comprehensible.

Autonomous systems, especially if they operate in safety-critical domains, need to be able to guarantee that they will never do certain things. For example, a robotic medicine dispenser needs to be able to guarantee that it will deliver the correct medicines at the correct time to the correct patient. Traditionally, we assess software by testing it in a range of scenarios. However, when there are very many possible situations that can be encountered, testing becomes infeasible, and there is an important role for formal verification techniques.

Finally, autonomous systems will often function in the context of human society, and to make good decisions, and act in an appropriate way, it can sometimes be important for the systems to have representations of human values, and to be able to reason about them and take them into account. For example, in what situations can a personal assistant share a person’s location? And with whom? Making this sort of decision requires understanding of privacy, and how it trades off against other important values (such as safety). It also requires awareness of social relationships: sharing a child’s location with their parent is different to sharing their location with their friends, or with their teachers.
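To make this concrete, here is a minimal sketch (our illustration, not code from Winikoff’s work) of what such a value-aware sharing decision might look like; the relationship and purpose names are hypothetical, and a real system would need a far richer representation of values.

```python
# Toy value-aware decision about sharing a child's location.
# Hypothetical names; only illustrates trading privacy off against safety.

from dataclasses import dataclass

@dataclass
class Request:
    requester: str   # e.g. "parent", "friend", "teacher"
    purpose: str     # e.g. "safety", "social"

def may_share_location(req: Request) -> bool:
    """Apply simple, context-specific rules: safety can outweigh privacy,
    but only for trusted guardians; social sharing only with peers."""
    if req.purpose == "safety":
        return req.requester == "parent"
    if req.purpose == "social":
        return req.requester == "friend"
    return False  # default to protecting privacy

print(may_share_location(Request("parent", "safety")))   # True
print(may_share_location(Request("teacher", "social")))  # False
```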

In your article in The Conversation, you argue that many of the decisions autonomous systems make should be based on human values. However, who decides on those values, given that they can differ from culture to culture?

Indeed. And they are also clearly not uniform even within a single culture (however one might define culture!). What technology can aim to provide is a framework that can be instantiated with different values and priorities. For example, a personal assistant might have the ability to reason about privacy when sharing location information. This could be instantiated with different rules for different contexts. This customisation could be done to some extent at a country or region level (for example, operating within the EU has certain implications for privacy rights), but also by individuals. How to represent these values in a way that allows effective customisation is still a research challenge.

In your opinion: Does the industry do enough to help clients have trust in autonomous systems?

I don’t want to be negative, but I would say that there is more to be done by many stakeholders, not just industry. Industry certainly has a crucial role to play. But there is a tension between rushing to develop and deploy certain technologies, and taking care to ensure that the technologies are fit-for-purpose. This is why there is also a crucial role for regulators and governments: we cannot leave this to industry.

You are best known for your work on design methodologies for agent-based systems, foremost among them, the Prometheus methodology. Prometheus is a detailed and complete (start to end) methodology for developing intelligent agents. Could you give an example of its use? And can you briefly describe the methodology?

Sure. It’s been used for a range of multi-agent systems in the literature. Without going to the literature and searching, I’ll mention just a couple of smaller examples: a book store realised as a multi-agent system (MAS), which is the running example in the 2004 book that Lin Padgham and I wrote, and a meeting scheduling system. I can also recall seeing work on a UAV design.

The Prometheus methodology provides concepts, a process, and notations for designing multi-agent systems. Crucially, it also provides detail on how to carry out the various activities. For example, if part of the process is to identify the agent types in the system, it is important that a designer (especially one not already experienced in designing MAS) has good guidance on how to identify agent types and on the trade-offs involved.

Prometheus consists of three phases (although of course they are not done in a strict linear sequence): specifying the system-to-be in terms of its goals and the environment it interacts with; designing the system by defining the agent types and how they interact (using interaction protocols); and doing a detailed design for each agent type. This last step, detailed design, is where Prometheus assumes, for concreteness, that goal-plan agents (also known as “BDI” agents) are used, but the other parts of the methodology do not assume this.
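As a rough illustration of the kinds of artefacts those three phases produce (this is our own sketch, not Prometheus’s notation, and the book-store names are hypothetical), one might write them down as follows:

```python
# Phase 1: system specification -- goals, percepts, and actions.
system_specification = {
    "goals": ["sell books", "manage stock", "handle deliveries"],
    "percepts": ["customer order", "stock level"],
    "actions": ["charge customer", "dispatch book"],
}

# Phase 2: architectural design -- agent types and their interactions.
architectural_design = {
    "agent_types": ["SalesAgent", "StockAgent", "DeliveryAgent"],
    "protocols": [("SalesAgent", "StockAgent", "reserve-book"),
                  ("SalesAgent", "DeliveryAgent", "arrange-delivery")],
}

# Phase 3: detailed design -- per agent type, the goals, plans, and the
# events that trigger them (assuming goal-plan, i.e. BDI, agents).
detailed_design = {
    "SalesAgent": {
        "goals": ["process order"],
        "plans": [{"trigger": "customer order",
                   "steps": ["check stock", "charge customer",
                             "request delivery"]}],
    },
}
```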

Which issues are problematic when it comes to testing multi-agent systems? And why is it especially difficult to test Belief-Desire-Intention (BDI) agents?

There are a number of issues that combine to make testing multi-agent systems very difficult. Firstly, MAS are by definition parallel, distributed systems. Secondly, such systems often have to deal with challenging environments that are complex and non-deterministic, where things can go wrong, and the systems are expected to be able to recover from such failures. And thirdly, the reasoning mechanisms used by cognitive architectures such as the BDI model can be powerful, but they can also make testing a challenge. Specifically, in some work (with Professor Cranefield) we analysed the BDI model and showed that even relatively small BDI goal-plan trees can give rise to enormous numbers of possible execution traces, which makes assurance through traditional testing infeasible.
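The combinatorial explosion can be illustrated with a small sketch (our own illustration, not the analysis from the paper): counting only the failure-free execution traces of a uniform goal-plan tree already grows extremely quickly, and failure handling makes the real numbers far larger still.

```python
# Count failure-free execution traces of a uniform goal-plan tree in which
# every goal has `plans` alternative plans and every plan posts `subgoals`
# subgoals, down to `depth` levels.

def traces(depth: int, plans: int, subgoals: int) -> int:
    if depth == 0:
        return 1  # a leaf goal is achieved by a primitive action
    # A goal: choose one of its plans; a plan: execute each subgoal in turn.
    return plans * traces(depth - 1, plans, subgoals) ** subgoals

for d in range(1, 5):
    print(d, traces(d, plans=2, subgoals=2))
# depth 1: 2, depth 2: 8, depth 3: 128, depth 4: 32768
```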

How is intention defined in BDI agents?

Well, it depends on your perspective.

Seriously, one of the things that are confusing about the BDI model is that in the literature there are a number of different perspectives. There’s the original philosophical work on folk-psychology by Michael Bratman, which is not about software at all. Then there is work on formalising concepts using modal logic, and then there is work about software architectures and languages.

Focusing on the software languages perspective, an intention can be thought of as a partially instantiated plan that the agent is intending to carry out. For example, if an agent has a goal, and it has instantiated a plan to realise the goal, then the parts of the plan instance that have not yet been done are the intention.

In practice, when creating BDI agents, the programmer focuses on writing plans, not on intentions. Intentions are not the focus, since they basically are a run-time structure that results from running the plans.
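The following minimal sketch (hypothetical, and not tied to any particular BDI language) illustrates that point: the programmer writes the plan, while the intention is just the run-time remainder of a plan instance that was adopted for a goal.

```python
# Toy illustration: plans are written by the programmer; an intention is
# the not-yet-executed remainder of a plan instance selected for a goal.

plans = {
    # goal -> (context condition over beliefs, list of steps)
    "make_tea": (lambda beliefs: beliefs.get("has_teabag", False),
                 ["boil_water", "add_teabag", "pour_water"]),
}

def adopt_goal(goal, beliefs):
    condition, steps = plans[goal]
    if condition(beliefs):
        return list(steps)  # the intention: steps not yet executed
    return None

intention = adopt_goal("make_tea", {"has_teabag": True})
while intention:
    step = intention.pop(0)  # execute the next step
    print("doing", step, "| remaining intention:", intention)
```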

Why is it important that agents generate protocols at runtime?

Well, I would say that it isn’t important in general, but that in some situations it becomes important. Having predefined protocols that are created at design-time is simpler. But in some situations, runtime protocols are needed. For example, if one has a system where new agents can join the system (an “open system”), then it may be necessary to allow for protocols to be created at runtime.

What makes debugging cognitive agent programs difficult?

The execution cycle of cognitive agent programs can be difficult to follow. For example, if a particular action was performed, trying to work out why can involve tracing back through various plans, the conditions that were true when those plans were selected, and even failures of earlier actions that resulted in other plans being attempted.
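As an illustration (a hypothetical trace of our own, not output from any particular debugger), the kind of record that such backward tracing relies on might look like this:

```python
# A recorded execution trace: plan selections, the context that held at the
# time, and failures that led to alternative plans being attempted.
trace = [
    {"event": "goal(deliver_medicine)", "plan": "use_lift",
     "context": {"lift_available": True}},
    {"event": "action(enter_lift)", "outcome": "failed"},
    {"event": "goal(deliver_medicine)", "plan": "use_stairs",  # retry
     "context": {"lift_available": False}},
    {"event": "action(climb_stairs)", "outcome": "ok"},
]

def why(action_event):
    """Walk backwards from an action to the plan selections and failures
    that explain why it was performed."""
    idx = next(i for i, e in enumerate(trace) if e["event"] == action_event)
    return [e for e in trace[:idx + 1]
            if "plan" in e or e.get("outcome") == "failed"]

for entry in why("action(climb_stairs)"):
    print(entry)
```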

In one of your papers, you argue that it is time to begin the development of a next-generation agent-oriented software engineering (AOSE) methodology, leading ultimately towards a unified AOSE methodology. Why is a unified AOSE methodology of such vital importance?

There’s an analogy with object-oriented design: in the early days of OO design there were many methodologies. This diversity is unavoidable in the early days of developing new methodologies, but it poses challenges. Firstly, a practitioner needs to select an appropriate methodology amongst the many available (which is difficult, since they need to have some familiarity with the different options). Secondly, tool support becomes difficult to provide, since with many methodologies there need to be many tools, which requires much more effort across the community. And, of course, having multiple notations hinders communication and education.

In the OO world, as we know, key people got together, and ultimately the Unified Modeling Language (UML) was created. This provided a single common notation that all designers could learn, and that different tools could support.

In the agent world this has not yet happened: there are still many (dozens of) methodologies, although only a few are well developed, and have seen substantial use.

Will traditional AI planning systems and the procedural reasoning system (PRS) coexist next to one another in the future? Or will PRS replace traditional AI planning systems?

I think they will co-exist, and in fact there has been work on how to better integrate the two. There are two main differences between traditional AI planning systems and BDI systems such as PRS. Firstly, traditional AI planning searches to find a complete plan, which is then executed, whereas situated systems need to interleave planning and acting. Secondly, traditional AI planning involves assembling individual actions into plans based on each action’s preconditions and postconditions, whereas PRS uses human-crafted recipes for combining actions. It is worth noting that HTN (Hierarchical Task Network) planning sits in between: it uses human-crafted recipes, but does lookahead planning.
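The contrast can be sketched as follows (a toy domain of our own, not from the interview): classical planning composes actions from their pre- and postconditions, while PRS/HTN-style systems execute hand-written recipes that decompose a task.

```python
# Classical-planning style: each action is defined only by its conditions;
# a planner searches for an action sequence whose effects achieve "tea".
actions = {
    "boil_water": {"pre": {"have_kettle"}, "post": {"hot_water"}},
    "add_teabag": {"pre": {"have_teabag"}, "post": {"teabag_in_cup"}},
    "pour":       {"pre": {"hot_water", "teabag_in_cup"}, "post": {"tea"}},
}

# PRS/HTN style: the designer writes the recipe directly. PRS executes it
# step by step, interleaved with acting in the world; HTN planners
# additionally look ahead through such decompositions.
recipes = {
    "make_tea": ["boil_water", "add_teabag", "pour"],
}
```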

Are there any alternatives to AgentSpeak as far as agent-oriented programming languages are concerned?

Yes, there are dozens of BDI agent-oriented programming languages. AgentSpeak was proposed by Anand Rao (back in the mid 90s) as an abstraction of previous notations (PRS, dMARS). It was subsequently implemented by a few people, with the most influential, and widely-used, implementation being Jason (subsequently integrated into JaCaMo (Jason+CArtAgO+Moise)). BDI languages other than AgentSpeak include JACK, Jadex, 2APL, Gwendolen, and also one could argue GOAL. But this is far from a complete list!

You are a professor in the School of Information Management at Victoria University of Wellington. What do students struggle with most when it comes to understanding autonomous systems?

I have to say that my experience in teaching autonomous systems has been primarily at RMIT University, where Professor Lin Padgham and I had an undergraduate course that taught both agent programming and agent design. At Otago University (where I was until earlier this year), there was less opportunity to teach autonomous systems.

I would have to say that my experience was positive: students (second year undergraduate students) did not really struggle. They were able to understand the concept of autonomous systems, and design and build simple systems in the course of a single semester.

As a professor: Should computer science students start with learning agent-oriented programming or object-oriented programming first? Or should they learn both simultaneously?

I would say that they should learn OO first. Object-oriented programming is more general: not everything is an agent. Of course, there is also a debate about whether objects should be learned first, or procedural programming …

There is a lot of talk about self-driving cars. However, is there a less broadly discussed use of applied autonomous systems which will be of importance in the future?

Self-driving cars are a “flashy” technology, and also one that we can all relate to. However, it is unfortunate that so much attention is focussed on them, because they have some quite particular characteristics that are not shared by other autonomous systems: for example, being safety-critical yet operating in an incredibly challenging environment.

In terms of some other applications of autonomous systems that merit attention I would mention drones, a wide range of robots, and, moving away from physically embodied systems, smart grids, smart homes, and digital personal assistants.

Finally, is there anything else you’d like to mention?

In the interests of keeping this from being too long I’ll just mention one thing.

The development and deployment of AI and of Autonomous Systems obviously poses many challenges for societies. The key questions that we need to grapple with include: what applications are acceptable, and how should we respond to developments?

Let me give a few quick examples. Facial recognition is an example of a technology where acceptability is currently being debated, and some places have banned it. What forms of use of facial recognition should be allowed and accepted (in a given social context)?

Another example is Lethal Autonomous Weapon Systems, where there is a strong argument for having a pre-emptive ban.

An example of a change that requires response is the workforce and the economy. To the extent that automation will result in substantial changes to the workforce, how should society respond to that? What makes such debates difficult is that there are things we just do not know, and cannot predict. We know that automation will transform some jobs (potentially changing the demand), and that it will eliminate some jobs, while creating others. What we do not know are the numbers and patterns.


Interview conducted by:

DIGITALE WELT Magazin Redaktion