Semantic Web Service Composition for Action Planning
1. Using Semantic Web Service Composition for Action Planning in Multi-Robot Systems. Shahab Mokarizadeh, Alberto Grosso, Mihhail Matskin, Peep Kungas, Abdul Haseeb. Royal Institute of Technology (KTH), Sweden. [email_address]
11. Action Planning Architecture [diagram: Perform High Level Task(input, output) enters the Problem Decomposition layer (backed by the Problem Ontology), which passes WSDLs to the Service Composition Engine; the resulting Composite Service Graph is handed to the Workflow Engine, which sends each service to be allocated to the Task Allocation layer, all on top of the Communication Layer]
12. Problem Decomposition Layer [diagram: the Problem Ontology drives Problem Decomposition, which selects ServiceA.wsdl and ServiceB.wsdl from the candidate descriptions ServiceA.wsdl through ServiceD.wsdl]
15. Service Layers in Problem Ontology [diagram: 1- Conceptual Service Description (OWL): MeasureTemperature, with hasInputParam Location and hasOutputParam Temperature; 2- Concrete Service Description (OWL + WSDL): MeasureTemperatureOfRoom, isa MeasureTemperature, hasWSDL, hasInputParam Room, hasOutputParam RoomTemperature; 3- Contextual Service Description (WSDL): MeasureTemperatureOfRoom1, MeasureTemperatureOfRoom2, isa MeasureTemperatureOfRoom]
21. Workflow Layer [diagram: the Graph of the Plan is translated into a BPEL Script, which is deployed to a BPEL Engine that invokes the Server-Side Service and the Robot Services]
24. Summary. A semantic web service based architecture for: 1- handling high-level task planning for robotic systems; 2- filling the gap between physical robotic systems and logical information systems on the web.
Hello and good morning. My name is Shahab Mokarizadeh, and I am presenting the usage of semantic web service composition….
In this presentation I will first give you a brief introduction to multi-robot systems in an Internet of Things environment: how they interact and what the intention of the interaction is. Then I will present the arising challenges and problems, and explain the motivation for the proposed architecture as a solution. Next, I will go through the layers of the architecture. If I still have time, I will show the movie from the experiment made based on this architecture. Finally, I will summarize the work.
"Internet of Things": I think the term is known to everyone and doesn't need much explanation. "A world-wide network of interconnected objects uniquely addressable, based on standard communication protocols." So, briefly, the idea is to have an environment providing access to information services anytime, anywhere and on any device. To provide identity for the objects in such an environment, we label them with RFID tags.
In such an environment we have an operating multi-robot system. A multi-robot system is a group of robots interacting with each other. The idea is that multiple robots act more efficiently than a single robot if the mission can be divided across a number of robots operating in parallel. There is a network of robots communicating among themselves and with other computing devices, and the robots are mobile. RFID tags are used as a medium to pass information among robots in the absence of an Internet connection; they connect objects in the environment to the Internet of Things. Applications can combine local and remote services.
The idea is to have an architecture that realizes end-user (human) requests through the available (information) services provided in such an environment. In other words, we need to combine both internal and external services seamlessly.
And as you might have already predicted, we have to resolve interoperability and overcome heterogeneity: the integration of heterogeneous robots. There is heterogeneity in robot capabilities and in robot application systems, due to differences in robot operating systems, programming languages, software and hardware vendors, legacy technologies, and so on. There is also heterogeneity in communication, both inter-robot communication and communication with server-side computing devices (communication protocols): robots with WiFi capability, robots with RFID capability, different robot software, etc.
And not surprisingly, one solution is Web technology standards. We would like to make web resources (specifically web services) available to robots and also expose robot capabilities on the web. We need to rely on some standards to resolve the interoperability and heterogeneity issues, so web technologies were selected. SOAP and WSDL provide an open and widely used web service standard. Such a solution promotes intelligent and seamless integration of services.
Let's go back to the robot world. The Robot Control System is the brain of the robot, controlling the robot's behavior and knowledge. It consists of two parts: one to process incoming requests, make a plan, assign the tasks in the plan to robots, and collect the results, … Choosing a server-side coordination mechanism is a direct consequence of the centralized ontology repository.
So far, we have robot services exposed as web services.
Goal of the ROBOSWARM project: develop an "Open Knowledge Environment" for self-configurable, low-cost and robust "Robot Swarms" usable in everyday applications. Swarm characteristics: overall control of robot action is not embedded into any single robot; the local behavior of each robot is only loosely dependent on the behavior of the other robots; local interactions among robots lead to the emergence of complex behavior.
Here is our proposed architecture for action planning. It consists of four major layers, which I will explain in detail in the next slides, but I just want to give you a short view of what goes on in this architecture. The topmost layer is the problem decomposition layer, which is responsible for discovering potential services that could satisfy the user request (either individually or in a composition). Those discovered services are passed to the composition engine to find a composition of the given WSDLs satisfying the end-user request. Then we have the workflow layer, responsible for orchestration of the services in the composition. Next is the task allocation layer, which is responsible for assigning the tasks (robotic services) to suitable robots based on QoS and other criteria.
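The hand-off between the four layers can be sketched as a small pipeline. Everything below is illustrative: the function names, the trivial linear "composition", the cheapest-bid "auction" and the example ontology entries are assumptions, not the project's real API.

```python
# Illustrative sketch of the four-layer action-planning pipeline;
# all names and data here are made up for the example.

def problem_decomposition(goal, ontology):
    # Layer 1: discover WSDLs whose annotated concept matches the goal.
    return [wsdl for concept, wsdl in ontology if concept == goal]

def service_composition(wsdls):
    # Layer 2: stand-in for the composition engine -- a trivial linear plan.
    return list(wsdls)

def task_allocation(task, robots):
    # Layer 4: stand-in for the first-price auction -- lowest bid wins.
    return min(robots, key=lambda r: r["bid"])

def workflow(plan, robots):
    # Layer 3: orchestrate the plan, allocating each robotic task in turn.
    results = []
    for task in plan:
        robot = task_allocation(task, robots)
        results.append((task, robot["name"]))
    return results

ontology = [("CleanCorridor", "CleanZoneA.wsdl"),
            ("CleanCorridor", "CleanZoneB.wsdl"),
            ("MeasureTemperature", "TempRoom.wsdl")]
robots = [{"name": "robot1", "bid": 3}, {"name": "robot2", "bid": 1}]

plan = service_composition(problem_decomposition("CleanCorridor", ontology))
print(workflow(plan, robots))   # both tasks go to robot2, the lowest bidder
```

The point of the sketch is only the data flow: the ontology answers discovery, the plan orders services, and allocation happens per task at execution time.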
Goal: facilitate the discovery of potential services in the domain of the problem through the problem ontology.
The "problem ontology" is a centralized knowledge base containing knowledge of the robot environment, the robots' capabilities, and the other external services available on the server side. The approach for decomposition is to associate the concepts in the ontology with the semantic representations of the services that affect, or are affected by, those concepts.
The problem ontology consists of three layers for the definition of services in the domain of the problem. 1- Conceptual Service Descriptions: abstract service definitions of common domain services, categorized according to some taxonomy standards. 2- Concrete Service Descriptions: concrete (real-world) instances of conceptual services, bound to specific WSDL descriptions but not executable (no endpoint). 3- Contextual Service Descriptions: contextual WSDL interfaces with no endpoint address, in which parameters (schema elements) and operation names are adjusted to reflect the context of the service. Hence, in order to be able to use a service in the constructed plan several times with different input and output sets (different contexts), we generate multiple instances of the same discovered concrete service.
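The third layer can be pictured with a small sketch: one concrete service is cloned into several contextual instances, each with renamed parameters and no endpoint. The real system rewrites WSDL schema elements; here a plain dict and an ad-hoc renaming scheme stand in for that, so every name below is an assumption.

```python
# Sketch: generate contextual instances of one concrete service so the
# planner can use it several times under different input/output names.

def contextual_instances(concrete, contexts):
    """concrete: dict with 'name', 'input', 'output'; contexts: list of labels."""
    instances = []
    for i, ctx in enumerate(contexts, start=1):
        instances.append({
            "name": f"{concrete['name']}{i}",        # MeasureTemperatureOfRoom1, ...
            "input": f"{concrete['input']}_{ctx}",   # parameter renamed per context
            "output": f"{concrete['output']}_{ctx}",
            "endpoint": None,                        # contextual: no endpoint yet
        })
    return instances

svc = {"name": "MeasureTemperatureOfRoom",
       "input": "Room", "output": "RoomTemperature"}
for inst in contextual_instances(svc, ["kitchen", "lab"]):
    print(inst["name"], inst["input"], "->", inst["output"])
```

This mirrors the MeasureTemperatureOfRoom1 / MeasureTemperatureOfRoom2 instances shown on the ontology slide: same concrete service, different contexts.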
The topmost layer is the problem decomposition layer, which is responsible for discovering potential services that could satisfy the user request (either individually or in a composition). To do so, we introduced the "problem ontology", a centralized knowledge base containing knowledge of the robot environment, the actions the robots can perform, and the services available on the server side. The approach for decomposition is to associate the concepts that are affected by robot actions or server-side services with the ontological representations of the services' WSDLs.
There are web services available that can calculate the comfort level of a room or building given its temperature and humidity.
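A service of that kind might look like the sketch below. The thresholds are purely illustrative placeholders, not taken from any comfort standard or from the actual service.

```python
# Hypothetical comfort-level service: classify a room from its
# temperature (deg C) and relative humidity (%). Thresholds are made up.

def comfort_level(temperature_c, humidity_pct):
    if 20 <= temperature_c <= 24 and 30 <= humidity_pct <= 60:
        return "comfortable"
    if 18 <= temperature_c <= 26 and 25 <= humidity_pct <= 70:
        return "acceptable"
    return "uncomfortable"

print(comfort_level(22, 45))   # comfortable
print(comfort_level(27, 80))   # uncomfortable
```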
In the LL composition engine, the inputs and outputs of the utilized services are treated as consumed and generated resources. LL (linear logic) is a refinement of classical logic that provides means for keeping track of resources. The resource-conscious nature of LL makes it possible to distinguish resources, count them, and even update them dynamically.
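The resource-conscious view can be illustrated with a multiset of resources: applying a service consumes its inputs and produces its outputs, so the same fact can be counted rather than just marked true. This is only a toy rendering of the idea, not the LL composition engine itself.

```python
# Toy illustration of linear-logic-style resource tracking: inputs are
# consumed, outputs produced, and resources are counted in a multiset.

from collections import Counter

def apply_service(resources, consumes, produces):
    """Consume inputs from the resource multiset, then add the outputs."""
    need = Counter(consumes)
    if any(resources[r] < n for r, n in need.items()):
        raise ValueError("insufficient resources")
    return resources - need + Counter(produces)

state = Counter({"Room": 2})                              # two rooms to measure
state = apply_service(state, ["Room"], ["RoomTemperature"])
state = apply_service(state, ["Room"], ["RoomTemperature"])
print(state)   # both Room resources consumed, two RoomTemperature produced
```

Note that a third application would fail: both Room resources have been used up, which is exactly the distinction classical logic (where a true fact stays true) cannot express.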
Execution and orchestration of both local (robot) tasks and remote tasks. Goal: orchestration, execution and monitoring of the individual services in the generated plan. Input: the directed graph of the composite service (the plan), plus the endpoints of the robots discovered by the task allocation layer. Output: the result of the plan execution, to be reflected back to the user.
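The core of the workflow layer's job, executing a directed plan graph in dependency order, can be sketched with the standard library's topological sorter (the real system translates the graph to a BPEL script first; the plan below is an assumed example).

```python
# Sketch: run the tasks of a directed plan graph in dependency order.

from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (illustrative plan).
plan = {
    "MeasureTemperature": set(),
    "MeasureHumidity": set(),
    "ComputeComfortLevel": {"MeasureTemperature", "MeasureHumidity"},
}

order = list(TopologicalSorter(plan).static_order())
print(order)   # both measurements come before ComputeComfortLevel
```

In the real architecture this ordering is what the BPEL engine enforces once the translated script is deployed.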
Goal: an effective task-assignment mechanism that reacts to both environment changes (e.g. addition of new environment areas) and robot-team changes (e.g. robot failures). Input: a service (task) definition (a robot service). Output: the identification (endpoint) of the robot in the swarm that will perform the service. Selection of the robot is managed through an "auctioning" mechanism (a first-price auction).
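A first-price auction over cost bids reduces to a one-liner: each robot bids its estimated cost for the task, and the lowest bid wins at its own price. The endpoints and bid values below are invented for the example.

```python
# Minimal first-price auction for task allocation: robots bid their
# estimated cost; the lowest bidder wins at its own bid.

def allocate_task(task, bids):
    """bids: {robot_endpoint: cost_estimate}; returns (winner, price)."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]        # first-price: winner's own bid

bids = {"http://robot1.local/clean": 4.2,
        "http://robot2.local/clean": 2.7,
        "http://robot3.local/clean": 3.9}
print(allocate_task("clean-subzone-1", bids))   # robot2 wins at 2.7
```

Because robots that have failed simply do not bid, and new robots bid as soon as they join, the mechanism absorbs team changes without any central reconfiguration.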
The server-side coordination system receives a complex task, decomposes it into elementary tasks and defines a workflow for executing them. When required by the workflow, the coordination system allocates the simple tasks to robots through an auction system. The task results are collected by the server and sent to the human operator. The human operator, by means of a smart phone, sends the task to the server and monitors the system. The operator asks for cleaning of the center corridor. The coordination system receives the task, automatically decomposes it into three tasks, one for each subzone of the corridor, and prepares an execution plan. Then the coordination system starts the execution plan by allocating the tasks to the robots of the swarm. The allocation is managed by the server, and the robots compete in an auction for acquiring the tasks. Once a task is allocated to a robot, the robot starts executing it.