
John Brooke, Kevin Garwood and Carole Goble
ESNW, University of Manchester, UK


There is, as yet, no common standard for describing Grid resources. Different Grid middleware systems have had to create ad hoc methods of resource description and it is not yet known how well these can interoperate. We describe work in the Grid Interoperability Project (GRIP) that investigates the possibility of matching the resource descriptions provided by the GLUE schema and implemented in MDS-2 with the resource descriptions provided by the Abstract Job Object framework utilised by UNICORE and stored in the UNICORE Incarnation Database. From this analysis we propose methods of working towards a uniform framework for resource description across different Grid middleware systems.

Translation of Resource Descriptions

We describe here a semantics-based approach to a problem that is becoming increasingly important in establishing standards for interoperability between Grid middleware systems. The problem grows more urgent as Grids are developed for production use. Much work has been done in setting up Grids for particular purposes, e.g. the various Particle Physics DataGrids [1], Grids on heterogeneous architectures [2,3], and Grids for running application services [3]. All such Grids have had to face the key problem of how to describe their available resources so as to enable higher-level functions, e.g. resource brokers, to discover resources on behalf of their clients. Furthermore, such brokers may wish to delegate resource requests among themselves in order to facilitate Grid economies. Within any Grid or Virtual Organisation (VO) there is often a great deal of implied knowledge. In the European DataGrid, for example, Virtual Organisations are created around particular experiments, and it is possible to prescribe very precisely the type of hardware and software to be used. Since this is known and defined as part of the VO, the knowledge can be used implicitly when writing the workflows and jobs that have to be brokered. On the other hand, on a Grid uniting several organisations in a VO where the hardware and software may be heterogeneous, it is not possible to rely on assumptions about software version numbers, application performance, policies, or the location of different parts of file systems (temporary storage, staging areas, etc.). In this latter case such knowledge has to be made explicit at the local rather than the VO level and must be interrogated by brokers and other high-level agents.

The work described here was carried out in the Grid Interoperability Project (GRIP) [4] to create a broker that could, on behalf of its clients, interrogate two different resource schemas. One is the GLUE schema [5], used to provide a uniform description of resources on the Data Grids being developed in the US and Europe and to enable federation of VOs in those projects for global analysis of data from particle physics experiments. The other is provided by the UNICORE framework, in particular the software model used to create local Incarnation Database (IDB) entries, which `ground' or `incarnate' Abstract Job Objects (AJOs), sent around the Grid as serialised Java objects. The motivation for this second method of describing resources came from the needs of users on Grids with highly heterogeneous architectures running many different types of applications, from legacy commercial applications available only in binary format to sophisticated coupled models incorporating specialist machines (e.g. visual supercomputers). The motivation for providing interoperability between the two systems is that both are widely deployed in Europe (and now also in the Asia-Pacific region), and federating Grids at a European level will ultimately face the problem of interoperability between them.

Resource Broker and Resource Requestor Spaces

An important discovery in the preparatory stage of developing the interoperability service was that there exist two important sources of resource description in a Virtual Organisation. One comes from the semantic spaces implicit in the Resource Requestors (RR), e.g. the clients and brokers looking for resources on behalf of their clients. The other comes from the semantic spaces implicit in the Resource Providers (RP), the information services and protocols advertising resources in the VO. Any functioning broker must provide a mapping from RR to RP space, since its primary function is to find the resources that match a user's request for resource consumption. There can be a one-to-many mapping from RR space to RP space, since each resource has its own RP space. In more sophisticated scenarios the RP space can recursively cast itself as an object in RR space by passing the resource request onwards, as if it had itself become a Resource Requestor. See Figure 1 for an illustration of this process. In the case of collaborative working (e.g. Access Grid), however, there is a many-to-many model and the interactions of the RR and RP spaces are highly dynamic. The Grid abstraction is particularly useful for examining such complex usage patterns, since it allows each physical resource in the Grid to be used in either an RR or an RP context. Thus RR and RP spaces arise naturally from considering the implications of resource sharing in a Virtual Organisation. A well-constructed Grid resource description schema, such as GLUE or the UNICORE IDB, ensures that all of the resources and sites joining the VO can describe themselves in some uniform manner, which should be as complete as possible.
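The one-to-many character of the RR-to-RP mapping can be sketched in a few lines. The following fragment is purely illustrative; the attribute names and the matching rule are invented for this sketch and are not part of any Grid middleware API.

```python
# Hypothetical sketch: a broker maps one Resource Requestor (RR) request
# onto the set of Resource Provider (RP) descriptions that can satisfy it.
# Attribute names ("cpus", "memory_mb") are invented for illustration.

def match(request, providers):
    """Return every RP whose advertised resources satisfy the RR request."""
    return [p for p in providers
            if p["cpus"] >= request["cpus"]
            and p["memory_mb"] >= request["memory_mb"]]

providers = [
    {"site": "site-a", "cpus": 16, "memory_mb": 32768},
    {"site": "site-b", "cpus": 4,  "memory_mb": 8192},
    {"site": "site-c", "cpus": 64, "memory_mb": 131072},
]

request = {"cpus": 8, "memory_mb": 16384}
offers = match(request, providers)   # one RR request, several RP offers
```

In the recursive scenario described above, an RP receiving `request` could itself re-issue it as a new RR against a further set of providers, which is why the same matching function serves at every tier.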

Designing an interoperable resource broker with a translator module as outlined above tests, in quite a deep sense, the completeness of the resource description schema of either system: if there are terms that are completely untranslatable between the two systems, then at least one of them cannot be described as complete. A semantic approach can help to decide whether terms in either system really are untranslatable, or whether they can be accommodated by a natural extension of either schema that does not break the coherence on which the Grid relies.
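This completeness test can be made mechanical: any term of one schema that the mapping leaves unmatched is a candidate untranslatable term. The term lists and mapping below are invented miniatures for illustration (though SMPLoad and the absence of a software description echo the differences discussed later in this paper); they are not the real schema vocabularies.

```python
# Illustrative completeness check: terms with no counterpart in the
# cross-schema mapping are flagged as potentially untranslatable.
# All term names here are invented for the sketch.

unicore_terms = {"Processor", "Memory", "Storage", "Software"}
glue_terms    = {"Processor", "MainMemory", "StorageSpace", "SMPLoad"}

mapping = {"Processor": "Processor",
           "Memory": "MainMemory",
           "Storage": "StorageSpace"}

untranslatable_unicore = unicore_terms - mapping.keys()
untranslatable_glue    = glue_terms - set(mapping.values())
```

A term flagged here is not necessarily lost: as argued above, it may instead point to a natural extension of the other schema.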

Translation Service for Resource Interoperability

In this paper we present a case study that provides a means of researching the general problem of interoperability outlined in Section 1. In the EUROGRID project [xx], a broker was developed that could take workflows described in the UNICORE AJO framework, broker them to the sites on EUROGRID, receive offers from those sites able to enact the workflow, and provide mechanisms for those sites to return tickets describing the QoS policy they would offer. We wished to extend this broker to allow it to query the information publishing mechanisms of MDS-2 as well as the UNICORE mechanisms. The architecture is shown in Figure 1. The key component is the Network Job Supervisor, which receives the AJO from the broker (our overall brokering architecture is multi-tiered [xx]). In a pure UNICORE world, the Incarnation Database is invoked to provide the translation between RR (the AJO) and RP (the detailed description of the resource). To ground the request on a resource, or even a whole Grid, using MDS-2, we need a translator to ground the AJO abstractions. This is a translation from an RR described in UNICORE terms to an RR described in MDS-2 terms; Globus then does the grounding within its own mechanisms. Note that some schema must be present, since this is a translation between two spaces in which resource is virtualised, or described in abstract terms; we chose GLUE because of its prevalence and importance in major Grid projects.
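The translation step just described can be sketched as rewriting an abstract UNICORE-style resource request into GLUE attribute names and rendering the result as an LDAP filter for an MDS-2 search. This is a hedged sketch: the dictionary keys are invented stand-ins for AJO resource terms, and while the GLUE-style attribute names are modelled on the schema's naming conventions, they should be treated as assumptions rather than the exact production vocabulary.

```python
# Sketch of the translator: UNICORE-style request terms -> GLUE attribute
# names -> an LDAP search filter for MDS-2. Term and attribute names are
# illustrative assumptions, not a definitive vocabulary.

AJO_TO_GLUE = {
    "Processors": "GlueCEInfoTotalCPUs",
    "MemoryMB":   "GlueHostMainMemoryRAMSize",
}

def to_ldap_filter(ajo_request):
    """Render the translated request as a conjunctive LDAP filter."""
    clauses = ["({}>={})".format(AJO_TO_GLUE[k], v)
               for k, v in sorted(ajo_request.items())]
    return "(&" + "".join(clauses) + ")"

f = to_ldap_filter({"Processors": 8, "MemoryMB": 16384})
```

The point of the sketch is the division of labour: the translator only rewrites the RR between vocabularies; the actual grounding against live resources remains with Globus and MDS-2, as described above.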

A working prototype was produced early in 2003. In this prototype the translator is limited to the terms in the selected AJOs and has to be augmented by hand as the range of AJOs is increased. This is essentially because the Ontology Engine part of the architecture does not yet exist. This neatly defined the next part of the work: to extract an ontology of resource description from UNICORE and from GLUE separately, initially considering the mapping of terms in a rapid prototyping phase. Note that this implies a back-reaction on each ontology, since the translation process should bring to light semantics that remain implicit in each system considered separately. This is indeed what came to light in the next phase.

Constructing and Mapping the Ontologies

To the best of our knowledge, this translation approach has not been attempted before in a Grid context. We therefore adopted the following procedure. Since the UNICORE abstractions are expressed in a hierarchy of Java classes, we could extract an initial ontology from the JavaDocs, which encapsulate the semantics of the classes and their inheritance tree. We then applied the same approach to the documentation provided with the GLUE schema.
Figure 1: The architecture of the resource broker, with a translator mapping from a UNICORE AJO to an LDAP search of the MDS-2 information services
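The idea of harvesting an initial ontology from a class hierarchy can be illustrated with a miniature example: each class becomes a term and each direct superclass link an is-a relation. The tiny hierarchy below is invented to stand in for the real UNICORE AJO classes; it is a sketch of the extraction principle, not of the actual class tree.

```python
# Rough illustration of harvesting ontology terms and is-a relations
# from a class hierarchy. The classes are invented stand-ins for the
# UNICORE AJO hierarchy described in the text.

class Resource: pass
class CapacityResource(Resource): pass
class Memory(CapacityResource): pass
class Processor(CapacityResource): pass

def isa_relations(*classes):
    """Yield sorted (term, parent) pairs from direct superclass links."""
    return sorted((c.__name__, b.__name__)
                  for c in classes
                  for b in c.__bases__ if b is not object)

relations = isa_relations(CapacityResource, Memory, Processor)
```

In practice the same information was read off the JavaDocs rather than live classes, and supplemented by the developers' implicit knowledge, as described below.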

We needed a knowledge capture tool to construct the initial ontologies. We investigated the possibility of using a full ontology editor based on DAML+OIL; however, this would have involved too much complexity for the scope of the GRIP project. We expected that the ontologies would change rapidly once we started to look at the mapping process. We also considered that we needed to supplement what was in the documentation with the implicit knowledge of the developers, which would have to be extracted via structured interviews. We decided to use the PCPack tool from Epistemics Ltd. It allowed us to compose the ontologies rapidly, to express the provenance of the terms that we employ, and to capture the mappings that we make in XML format. As these mappings are adjusted through the tool's graphical interface, the XML is automatically updated. The XML files derived from the knowledge capture process will be used by the Ontology Engine in Figure 1. We show part of the UNICORE AJO structure captured by PCPack in Figure 2.
Figure 2: Part of the UNICORE Ontology derived via PCPack
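To give a feel for the kind of artefact the knowledge capture process exports, a mapping with provenance might look roughly like the following fragment. The element and attribute names here are invented for illustration and do not reproduce PCPack's actual export format.

```xml
<!-- Illustrative only: invented element names, not PCPack's real schema -->
<ontology-mapping source="UNICORE-AJO" target="GLUE">
  <term-map>
    <source-term>Memory</source-term>
    <target-term>MainMemory</target-term>
    <provenance>AJO JavaDoc entry / GLUE Host documentation</provenance>
  </term-map>
</ontology-mapping>
```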

When we came to examine the GLUE schema we found a fundamental difference of philosophy from the UNICORE approach. GLUE models the physical resources available and their dynamic capabilities (loading etc.). This dynamic information is not currently provided in the UNICORE IDB; it would have to be requested by the resource broker by launching an AJO that queried, for example, the status of the queues. On the other hand, the GLUE schema we examined does not have a description of software resources, which may be required in a Grid where many different types of applications are deployed. The intersection of the UNICORE resource description universe with the GLUE resource description universe is represented in very schematic form in Figure 3.
Figure 3: Diagram showing the intersection of the UNICORE and GLUE resource domains

We now show how we have derived an ontology from the GLUE schema. Figure 4 shows a screen dump from PCPack with the GLUE hierarchy.
Figure 4: The Glue Ontology in PCPack

We then added provenance for each of the leaves in the tree. This shows how we can justify our semantic definitions in terms of the original GLUE schema.
Figure 5: The provenance of one of the terms in the GLUE ontology. This refers to a dynamic resource, SMPLoad, that does not currently exist in UNICORE.

We are now ready to describe the translator tool that will map between the ontologies (next section).

Ontology Mapping between UNICORE and GLUE

PCPack is able to output XML documents that encapsulate the knowledge in the UNICORE and GLUE ontologies. We used this XML as input to a translator program, which we have recently incorporated into the architecture shown in Figure 1. We show a snapshot of the running program in Figure 6. Using this tool we can rapidly explore mappings between the ontologies and encapsulate them as XML documents. These can then be used by the ontologically driven translator service shown in Figure 1.
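The core of such an ontologically driven translator is small: load a term mapping from XML, then rewrite terms from one vocabulary into the other, refusing terms the mapping does not cover. The XML format and term names below are invented stand-ins for the PCPack export; this is a sketch of the mechanism, not the project's actual file format.

```python
# Minimal sketch of the ontologically driven translator: term mappings
# are read from XML and applied to individual terms. The XML schema and
# term names are invented for illustration.

import xml.etree.ElementTree as ET

MAPPING_XML = """
<mappings>
  <map unicore="Memory" glue="MainMemory"/>
  <map unicore="Processor" glue="Processor"/>
</mappings>
"""

def load_mapping(xml_text):
    """Build a UNICORE -> GLUE term dictionary from the mapping XML."""
    root = ET.fromstring(xml_text)
    return {m.get("unicore"): m.get("glue") for m in root.findall("map")}

def translate(term, mapping):
    """Translate one term; unmapped terms signal a schema gap."""
    if term not in mapping:
        raise KeyError("untranslatable term: " + term)
    return mapping[term]

mapping = load_mapping(MAPPING_XML)
```

Because the mapping lives in XML rather than in code, regenerating it from the knowledge capture tool updates the translator without any reprogramming, which is the property the architecture in Figure 1 relies on.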

Since the intention is to move the information in the UNICORE IDB and UUDB to XML format, we can see that the architecture is becoming consistent: the ontology also exists in XML and can therefore be integrated into the whole incarnation process. Thus the fundamental basis of the UNICORE architecture is respected, and the great flexibility and extensibility of the incarnation process is revealed.

We now intend to use the translator tool shown in Figure 6 as a basis for discussions with the GLUE and UNICORE developers. The resulting Grid Resource Ontology would inform the development of a genuinely semantic Grid Resource Description Language.
Figure 6: The translator service. On the left are the UNICORE ontology terms, on the right the GLUE ontology terms, with the translation workflow in between.

Future work

We need to further develop the tools outlined in Section 5. The creation of this service is an important driver for a future Semantic Grid. We would estimate that the development of an agreed Grid Resource Ontology will take 18 months to 2 years. However by demonstrating this partial ontological mapping we have at least moved the process out of the realm of theory into practice.

The development of a Grid Resource Ontology will transform services such as resource brokerage and resource discovery. For scalable and dynamic virtual organisations these are essential tools. At present we can only achieve any sort of Grid scaling by imposing homogeneity of resources on a VO. Thus in the EU DataGrid and CERN LCG (probably the largest production Grids currently operating), the VO policy in terms of operating system, versioning of Grid software, hardware deployment and architecture is prescribed. This is a valuable experiment in scaling, but it is not the future of Grid computing, whose particular domain is the seamless integration of heterogeneous resources. We believe that our fledgling semantic translation service is a small but vital step on this road.


References

[1] e.g. DataGrid.

[2] e.g. UK Level 2 Grid, information available from the UK National e-Science Centre.

[4] Grid Interoperability Project.

[5] Condor: tools for high-throughput computing.