With i-views, databases work the way people think: simple, agile and flexible. That is why many things in i-views are different from relational databases: we do not work with tables and keys but with objects and the relationships between them. Data modelling is visual and example-oriented, so that we can also share it with users from the specialist departments.

With i-views we do not set up pure data storage but intelligent data networks which already contain a lot of business logic and with which the behaviour of our application can, to a large extent, be defined. To this end we use inheritance, mechanisms for inference and for the definition of views, along with the multitude of search functions which i-views has to offer.

Our central tool is the Knowledge Builder, one of the core components of i-views. Using the Knowledge Builder we can:

  • define the schema, but also create example data and, above all, visualise it
  • define imports and mappings from data sources
  • formulate queries, traverse networked data, process strings and calculate proximities
  • define rights, triggers and views

All these functions are the subject of this documentation. One continuous example runs through it: a semantic network about music, bands, songs, etc.

The basic components of modelling within i-views are:

  • specific objects
  • relationships
  • attributes
  • object types
  • relation types
  • attribute types

Examples of specific objects are John Lennon, the Beatles, Liverpool, the concert in Litherland Town Hall, the 1970 football World Cup in Mexico, the Leaning Tower of Pisa, etc.:


We can link these specific objects together through relationships: "John Lennon is a member of the Beatles", "The Beatles perform a concert in Litherland Town Hall".


Additionally, we have introduced four types here: specific objects always have a type, e.g. person, city, event or band – types which you can freely define in your data model.

The main window of i-views: on the left-hand side the object types, on the right-hand side the respective specific objects – here we can also see that the types in an i-views network form a hierarchy. You will find out more about the type hierarchy in the next section.

Relationships also have different types: between John Lennon and the Beatles there is the relationship "is member of"; between the Beatles and their concert the relationship could be called "performed at" – if we want to generalise more, "participates in" is perhaps a more practical relation type.


The same applies to attributes: in the case of a person, these may be the name or the date of birth. Specific persons (objects of the type "person") may then have a name, date of birth, place of birth, address, eye colour, etc. Events may have a location and a time span. Attributes and relations are always defined with the object itself.
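Put together, the basic components from the example can be sketched schematically. The following JavaScript-style notation is purely illustrative and is not an i-views storage or exchange format:

    // Purely illustrative sketch of the modelling components, not an i-views format.
    var schema = {
      objectTypes:    ["person", "band", "city", "event"],
      relationTypes:  [{ name: "is member of", inverse: "has member",
                         source: "person", target: "band" }],
      attributeTypes: [{ name: "date of birth", dataType: "date", definedFor: "person" }]
    };
    var objects = [
      { type: "person", name: "John Lennon",
        attributes: { "date of birth": "09.10.1940" },
        relations:  { "is member of": "The Beatles" } },
      { type: "band", name: "The Beatles" }
    ];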

We can subdivide object types more or less finely: we can put the 1970 football World Cup into the same basket as all other events (the 2015 book fair, the Woodstock festival, etc.), in which case we only have one type called "event", or we can differentiate between sports events, fairs, exhibitions, music events, etc. Of course, we can subdivide all these event types even more finely: sports events may, for example, be differentiated by the type of sport (a football match, a basketball match, a bike race, a boxing match).

In this manner we obtain a hierarchy of supertypes and subtypes:

The hierarchy is transitive: when we ask i-views for all events, we are shown not only the specific objects of the type "event" itself but also all sports events, and all bike races, boxing matches and football matches. And since the type "boxing match" is already an indirect subtype of "event" via "sports event", i-views will reject a direct supertype/subtype relationship between "event" and "boxing match" – with a note that this connection is already known.

The hierarchical structure does not necessarily have to be a tree – an object type may also have several supertypes. A specific object, however, may only have one object type.

If we then wish to combine the aspects of a concert and a major event, we cannot do this on the specific concert with Paul McCartney; we need the object type "stadium concert" to do this:


Type hierarchy with multiple inheritance

The affiliation of a specific object with its object type is also expressed as a relation in i-views and can be queried as such:

When do we differentiate between types at all? Types do not only differ in icon and colour: their properties are defined on the types, and queries can easily filter by type. Inheritance plays a major role in all of this: properties are inherited, icons and colours are inherited, and when a query asks for events, objects of all subtypes are also shown in the results.

Inheritance makes it possible to define relation types (and attribute types) further up in the object type hierarchy and hence use them for different object types (e.g. for bands and other organisations).

Creating specific objects

Specific objects (called "instances" in the Knowledge Builder) can be created anywhere in the Knowledge Builder where object types are visible. Starting from an object type, new objects are created via its context menu.

An object can be created by means of the "new" button and the name entered.

In the main window, below the header, there is the list of specific objects that already exist. So that objects cannot inadvertently be created twice, the name of the object can be typed into the search field in the header. By default, the search does not differentiate between upper and lower case, and the search term may be truncated on the left and right (completed by the placeholders "*" and "?"):
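Assuming the usual placeholder semantics ("*" for any sequence of characters, "?" for exactly one character), searches in the example network could look like this:

    beat*      finds "Beatles" (case-insensitive)
    *ennon     finds "John Lennon" (term truncated on the left)
    Beatle?    finds "Beatles"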



Editing objects

After entering and confirming the name of the object, further details for the newly created object can be entered in the editor. The object can be assigned attributes, relations and extensions using the respective buttons.

When editing an object we can, in addition to linking it to another object, also create the target of the link if that object does not already exist.

For example, suppose the members of a music band are to be documented completely. We want to link the member Ringo Starr with the object "The Beatles" via the relation. If it is not yet clear whether the object Ringo Starr already exists in i-views, you can use the search button to find out,

or, via the icon button 'Choose relation target', select the target from a searchable list of all feasible relation targets.

Deleting the relation "has member" can be accomplished in two different ways:

  1. Via the "further actions" button in the context menu and the option "delete".
  2. By clicking the "further actions" button while holding down the Ctrl key.

However, this does not delete the target object of the relation itself. If an object is to be deleted, this is done via the delete button in the main window or via the context menu directly on this object.

Objects can also be created using the graph editor. This process is described in the following sections.

By using the graph editor, knowledge networks with their objects and links can be depicted graphically. The graph editor may be opened on a selected object using the graph button:

The graph always shows a section of the network. Objects from the graph may be displayed and hidden and you can navigate through the graph.

The graph editor does not only display a section of the network: objects and relations can be edited in it as well.

On the left-hand side of a node there is a drag point for interacting with the object. Double-clicking on the drag point displays or hides all user relations of the object.

Linking objects via a relation is carried out in the graph editor as follows:

  1. Press and hold the left mouse button on the drag point to the left of the object.
  2. Drag the cursor, with the mouse button held down, to another object (drag & drop). If several relations are available for selection, a list of all feasible relations appears. If there is only one feasible relation between the two objects, it is selected and no list is shown.

In order to display objects in the graph editor there are different options:

  • Objects may be dragged from the hit list in the main window to the graph editor window using drag & drop.
  • If the name of the object is known it can be selected via the context menu using the function "show individual".

If an object is to be hidden in the graph editor, it can be removed by clicking it and dragging it out of the graph editor while holding down the Ctrl key. This does not change the data: the object continues to exist unchanged in the semantic network; it is simply no longer displayed in the current graph editor view.

New objects can also be created in the graph editor. To do this, we drag & drop the object type from the legend on the left-hand side of the graph editor onto the drawing area:

If no object types are visible in the legend, you can search for them by right-clicking in the legend area. After that, the name of the new object is entered.

The editor then appears, in which the possible relations, attributes and extensions for the object can be edited.

Right-clicking the object opens the context menu, which allows further operations to be executed. For the most part, this context menu provides the same functions as the form editor, but it also includes some graph editor-specific components.

The following graph editor-specific functions are available in this context menu:

  • Hide node: The node can be hidden here.
  • Navigation - Extensions: Opens the extensions for an object.
  • Navigation - Calculated relations: Opens the calculated relations for an object.
  • Navigation - Fix: Fixes the position of a node in the graph editor so that it is not repositioned even when the layout is restructured. Fixing can be undone using the Release option.
  • Navigation - Shortest path

The menu "View" provides many more functions for the graphic illustration of objects and types of objects:

Default settings: Opens the menu with the default settings for the graph editor. This menu is also available in the global settings window -> "Personal" tab -> Graph. There you can set whether attributes, relations and extensions should appear in a small mouse-over window above the object, and the maximum number of nodes shown in one step:

  • Show bubble help with details: if the mouse pointer rests on a node, the details of the first ten attributes and relations are displayed in a yellow window, provided bubble help has been activated (check "show bubble help with details" in the global settings window, "Personal" tab, Graph).
  • Max nodes: if a node/object has many adjacent objects, it often doesn't make sense to show them all at once when clicking on the drag point; this setting limits their number.

Change Background: The background color can be changed or a picture can be set as background.

Auto hide nodes: automatically hides surplus nodes as soon as more nodes than desired are shown. The number can be set in the "max. nodes" input field in the toolbar:

Auto layout nodes: automatically applies the layout function to newly displayed nodes.

Fix all labels: with this option the names of all relations are always visible, not only on mouse-over. Alternatively, the label can be fixed directly in the context menu of a relation.

Show internal names: displays the internal names of types in brackets.

Recover hidden edges: all edges hidden via the context menu are shown again.

The graph editor window and the main window of the Knowledge Builder provide even more menu items which can offer support when modelling the knowledge network.

On the left-hand side of the graph editor window there is the legend of the types of objects.

This legend shows the types of objects for the specific objects on the right-hand side.

By dragging & dropping an entry from the legend into the drawing area you can create a new specific object of the corresponding type.

Via the context menu of a legend entry, all specific objects of that type can be hidden from the view. Here you can also "hold" legend entries and add further object types to the legend (regardless of whether specific objects of that type are currently shown).

If the drag point is clicked to show the adjacent objects, a selection list appears instead of the objects.

Detailed View: the option "Detailed View" is selected by default when the Knowledge Builder is started. You can navigate to other nodes by double-clicking the drag points of an object:

  • the top drag point displays the type of a specific object or the supertype of a type 
  • the lower drag point leads to the subtypes of a type
  • the drag point on the left shows relations to other objects

When the "Detailed View" option is unchecked, a box with a plus sign is shown instead:

The plus sign only shows adjacent objects that are linked to the displayed object via relations. If there are several links, the selection list dialogue appears here as well.

The "Graph" menu contains further functions for the graph editor:

Bookmarks: parts of the knowledge network, or "sub-networks", can be saved as bookmarks. The objects are saved in the same positions they occupy in the graph editor.

When a bookmark is created it may be given a name. All nodes contained in the bookmark are listed in the description of the bookmark.

Bookmarks, however, are not data backups: objects and relations which were deleted after a bookmark was saved are also no longer available when the bookmark is opened.

History: using the buttons "reverse navigation" and "restore navigation", elements of a (section of a) knowledge network can be hidden again in the order in which they were shown (and vice versa). These buttons also undo the auto layout. They can be found in the header of the graph editor window or in the "Graph" menu.

Layout: the layout function positions nodes automatically, so that large numbers of nodes do not have to be positioned manually. When additional nodes are displayed, they are also positioned automatically in the graph via the layout function.

Copy into the clipboard: this function creates a screenshot of the current contents of the graph editor. This image can then be pasted into a drawing or image-processing program, for example.

Print: opens the dialogue window for printing or for generating a PDF file from the displayed graph.

Cooperative work: this function enables several users to work on the same graph simultaneously. All changes and selections a user makes on the graph (layout, showing/hiding nodes, etc.) are then shown to all other users synchronously.

The principle of the type hierarchy was already presented in Chapter 1.2. New types are always created as subtypes of an existing type. Subtypes can be created either via the context menu Create -> Subtype

or in the main window using the tab "Subtypes" above the search field and the tab "new":

 

Changing the type hierarchy

To change the type hierarchy, we can use the tree of object types in the main window or the graph editor.

In the hierarchy tree of the object editor we will find the option "Removing supertype x from y" in the context menu.

Using this option we can remove the currently selected object type from its position in the object type hierarchy, and with drag & drop we can move an object type to another branch of the hierarchy. If we hold down the Ctrl key during drag & drop, the object type is not moved but additionally assigned to another object type. As before: the object type hierarchy allows multiple assignment and multiple inheritance.

 

Configuring object types with properties

In the simplest case we define relations and attributes on an object type such as "band" or "person" and thus make them available to the specific objects of this type (for example, the year and place a band was founded, the date of birth and gender of persons, the location and date of events).

If the object type on which the properties are defined has subtypes, the principle of inheritance takes effect: the properties are also available to the specific objects of the subtypes. Example: as a subtype of "organisation", a band inherits the possibility of having persons as members. As a subtype of "person or band", the band inherits the possibility of taking part in events:

The editor for the object type "band" with directly defined and inherited relations.

On a specific object, the inherited properties are available as a matter of course; the difference is not noticeable.

 

Defining relations

A basic principle governs relations in i-views: a relation is never only unidirectional. If we know of the specific person "John Lennon" that he "is a member of the band The Beatles", this implies for the Beatles the statement "has a member called John Lennon". These two directions cannot be separated. Therefore, when creating a new relation type, i-views asks us for the source and target types of the relation – in our example person and band – as well as for two different names: "is member of" and "has member".

The relation is hereby defined and can now be drawn between objects using drag & drop.

 

Defining attributes

When defining new attribute types, i-views needs, above all, the technical data type as well as the name. The following technical data types are available:

Data type | What do the values look like? | Example (music network)
--------- | ----------------------------- | -----------------------
Attribute | abstract attribute without an attribute value | –
Selection | freely definable selection list | design of a musical instrument (hollowbody, fretless, etc.)
Boolean | »yes« or »no« | is the band still active?
Data file | arbitrary external data file which is imported into the knowledge network as a »blob« | WAV file of a music title
Date | date dd.mm.yyyy (in the German language setting) | publication date of a recording medium
Date and time | date and time dd.mm.yyyy hh:mm:ss | start of an event, e.g. a concert
Colour value | colour selection from a colour palette | –
Flexible time | month, month + day, year, time or time stamp | approximate date when a member joined a band
Floating point number | numerical value with an arbitrary number of decimal places | price of an entrance ticket to an event
Integer | numerical value without decimal places | runtime of a music title in seconds
Geographical position | geographical coordinates in WGS84 format | location of an event
Band | without an attribute value; serves as a container for grouping meta attributes | –
Internet link | link to a URL | website of a band
Interval | interval of numbers, character strings, times or dates | period of time between the production of an album and its publication
Password | a unique hash value per attribute instance and password (Chaum-van Heijst-Pfitzmann) which is only used to validate the password | –
Reference to [...] | reference to parts of the network configuration: queries, data source mappings, scripts and files – used, for example, in the REST configuration | –
Character string | arbitrary sequence of alphanumeric characters | review text for a recording medium
Time | time hh:mm:ss | duration of an event

The point of these data types is to avoid defining everything as a character string. Technical data types with a defined format offer special query and comparison possibilities later on. For example, numerical values can be compared as greater or smaller in structured queries, a proximity search can be defined for geographical coordinates, etc.

Via the button "add relation" in the object editor the editor starts to create a new relation type.

Editor for creating a new relation type (see also Chapter 2.1 Defining types)

Name of new relation: names for relation types can be chosen freely in i-views but should be selected with a comprehensible data model in mind. The following convention can help: the relation name is phrased in such a way that the structure [name of the source object] [relation name] [name of the target object] results in a comprehensible sentence:

[John Lennon] [is a member of] [The Beatles]

Furthermore, it is helpful if the opposite direction (inverse relation) takes up the wording of the main direction: "has member" / "is member of".

Domain: here we define between which object types the relation can be created: one object type forms the source of the relation and another object type the target. The target object type, in turn, forms the domain of the inverse relation. To keep matters simple, only one object type can be entered at this stage. Afterwards, further object types can be added in the editor for the relation type (see below).

Via the button "define new attribute" in the object editor the editor starts to create a new attribute type:

Two-tier dialogue for creating a new attribute type

In the left-hand window, the format of the attribute type is defined (date, floating point number, character string, etc.). After selecting and confirming the format, the attribute type can be further specified with its name in the subsequent dialogue.

Supertype: here it is defined at which level of the hierarchy the attribute type is placed.

May have multiple occurrences: attributes may occur once or several times, depending on the attribute type: a person has only one date of birth but may, for example, hold several academic titles at the same time (e.g. doctor, professor and honorary consul).

The dialogues for creating new attribute and relation types are limited views of the attribute and relation type editors. To edit the details of relations and attributes, the full editors with their enhanced scope of functions are needed.

You get to these two editors via the listing of relations and attributes on the Schema tab of the object editor:


Alternatively, you can use the hierarchy tree on the left side of the main window for access. The hierarchies for relation and attribute types are located underneath the object types. The editors are started by right-clicking the relation or attribute to be edited and choosing Edit in the context menu.


Next, we will look at the details of property definition, using the relation type editor as the example (the attribute type definition is a subset of it):


Defined for: Here we can subsequently adjust for which object types the relation can be created. Relations can be defined between several object types and thus have several sources and targets.

In this way, the schema can allow both persons and bands to be authors of a song, or to be assigned a location, even if they do not have a supertype in common.
We can use the Add button to add further object types. We can use Remove to prevent this object type and all its objects from entering into this relation.

Change makes it possible to replace an object type. Existing relations are then deleted by the system. If there are relations to be deleted, a confirmation prompt appears before the change is made.

Target: Here you can retrospectively change for which object types the relation can be used. To change the target object type, you have to switch to the inverse relation type: the button for changing bears the label of the inverse relation type. After clicking the button, the inverse relation appears in the editor and can be edited in the same way as the previous relation.

Abstract: If we want to define a relation which is only used for grouping but is not supposed to define concrete properties, we define it as “abstract.”

Example: If the relation Writes song is defined as abstract, this means: when we create songs and their relations to artists and bands, we enter the specific information (who wrote the lyrics, who wrote the music). The abstract relation Writes song itself cannot be created in the actual data but can only be used in queries.

May have multiple occurrences: One characteristic of relations is whether they may have several occurrences. For example: the relation Has place of birth can only occur once for each person whereas e.g. the relation is member of can occur several times for a person. Hence, logical matters can be modeled precisely. For example, musicians as persons can only have one place of birth but (at the same time) can also be members of several bands. Whether the relation can occur multiple times is specified independently for each direction of the relation: A person can only have one place of birth but the place can be the place of birth of several persons.
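Schematically, the direction-specific multiplicity from the example:

    has place of birth:    person --> place    (at most one per person)
    is place of birth of:  place --> person    (may occur several times per place)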

The option can only be deactivated if the relation does not occur several times anywhere in the actual data. If it does, the system cannot decide automatically which of the relations should be removed.

Mix-in: Mix-ins are described in the Extension chapter.

Main direction: Every relation has an opposite direction. In the core, the two directions are equivalent, but there are two places where it makes sense to determine a main direction:

  • In the graph editor: arrows and labels always present relations in the main direction, irrespective of the direction in which they were created.
  • For single-sided relations (without inverse relation)

Additional setting options for relations and attributes are located in the “Definition” sub-item on the “Details” tab. The setting options under Definition are used often, which is why they are also available on the Overview tab. Under “Definition (advanced)”, in contrast, there are setting options that are required less frequently.

Behavior: This information is designed to assist i-views developers with debugging and can be ignored.

Counter: If a number is entered in the counter, objects of this type are numbered counting up from this value. The JavaScript functions getCounter(), increaseCounter() and setCounter() can be used to access the counter.
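A minimal usage sketch with these functions. How the type object (here bandType) is obtained depends on the scripting API version and is assumed here:

    // Sketch only: "bandType" is assumed to already refer to the object type
    // on which the counter is configured; how to obtain it is API-dependent.
    var current = bandType.getCounter();  // read the current counter value
    bandType.setCounter(1000);            // set the counter to a new start value
    bandType.increaseCounter();           // count up by one, e.g. when numbering a new object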

Name attribute for objects: (Note: can only be set on object types, not on relation or attribute types)
Typically, many views in i-views represent an object only via its name (e.g. in object lists, hierarchies, the graph editor, the relation target search, etc.). Here you can specify any other attribute of the objects by which they are represented instead of the name. A prominent example for products: the article number.

Name attribute for types: This can also be used to select an alternative attribute for a more descriptive display of types.

Property can be iterated:
Selection options: Active / Write only / Inactive.
Default: Active.

Sometimes the maintenance of the index for iterating properties severely affects performance. This typically happens with meta properties such as “changed by” or “changed on”, which do not necessarily have to be taken into account all the time. In such cases we recommend making the properties non-iterable by using the “Inactive” option. The purpose of “Write only” is to deny read access while still allowing write access; this makes it possible to test for inadvertent side effects.

Reference value for minimum occurrence: This reference value relates to the user interface in the Knowledge Builder and, as of version 5.3, also to the user interface in the web front-end. It specifies the minimum number of times a property is supposed to occur on an object. If the actual number falls below it, the property is displayed in red in the user interface, but the object can continue to exist. An import ignores the reference value.

Reference value for maximum occurrence: As of Version 5.3, this reference value relates to the user interface in Knowledge Builder and the user interface in the web front-end. It specifies the maximum number of times the property should occur on an object. If the specified number is reached, no additional properties can be created. An import ignores the reference value.

In i-views you can make changes to the model at runtime:

  • create new types
  • make arbitrary changes to the type hierarchy (without creating tables or giving any thought to primary and foreign keys).

The system ensures consistency. When objects and properties are created, the opposite direction of a relation is always included. Attribute values are checked against the defined technical data type (for example, we cannot enter an arbitrary character string in a date field).

Consistency is also important when deleting: dependent elements always have to be deleted along with an element, so that no leftover data from deleted elements remains in the network.

  • Thus, when an object is deleted, all its properties are deleted along with it. If, for example, we delete the object "John Lennon", we also delete his date of birth and his biography text (which we might have as a free-text attribute for each person), and likewise his relation "is member of" to the Beatles and "is together with" to Yoko Ono. The objects "The Beatles" and "Yoko Ono" are not deleted; they only lose their link to John Lennon.
  • When deleting a relation the opposite direction is automatically deleted with it.

Since i-views always ensures that objects and properties conform to the model, deleting an object type is an operation with far-reaching consequences: when an object type is deleted, all its specific objects are deleted as well – and the same applies to relation and attribute types.

In this process, i-views always provides information on the consequences of an operation. If an object is to be deleted, i-views lists all properties that will be removed in the confirmation dialogue of the delete operation:

i-views checks where objects, relations or attributes would be lost as a result of the change, and makes the user aware of the consequences of the deletion.

Not only deletion, but also conversion or changes to the type hierarchy can have consequences – for example, when objects have properties which no longer comply with the model after a change of type or of inheritance.

Let us assume that we delete the relation "is supertype of" between "event" and "concert", thus removing the object type "concert" and all its subtypes from the inheritance hierarchy of "event" in order to add them to "work", for example. In this case, i-views draws our attention to the fact that the "has participants" relations of the specific concerts would be lost. This relation is defined on "event" and would thus no longer apply to the concerts.

There are ways to prevent relations from being lost as a result of model changes. If an object type is to be moved within the type hierarchy, for example, the model of the affected relation has to be adapted beforehand.

For example, if "concert" is to be located under "work" within the hierarchy and no longer under "event". To this end, the relation "has participants" will be assigned to a second source: that can be either the object type concert itself or the new item "work". The relation will hence not be lost.

i-views pays particular attention to the type hierarchy. If we delete a type from the middle of the hierarchy or remove a supertype/subtype relation, i-views closes the resulting gap and reattaches the types which have lost their supertype to the type hierarchy in such a way that they keep their properties as far as possible.

 

Special functions

Changing the type: objects already in the knowledge network can be converted into objects of another type. Suppose, for example, that the object type "event" is differentiated into "sports event" and "concert". If the knowledge network already contains objects that belong under sports event or concert, they can be selected from the list in the main window and quite simply moved to the new, more suitable object type using drag & drop.

Alternatively, this operation can be found in the context menu under the item "edit".

Select type: using this operation we can assign a property to an object.

 

Reselect relation target: for relations, this applies not only to the source but also to the relation target.

Convert subtypes to specific objects (and vice versa): the border between object types and specific objects is obvious in many cases, but not always. Instead of setting up only one object type called "musical genre", as in our sample project, we could have set up an entire type hierarchy of musical genres (we decided against this in this network because the musical genres classify so many different things – bands, albums and songs – and therefore do not make good types). It may happen, however, that we change our minds in the middle of modelling. For this reason, it is possible to change subtypes into specific objects and specific objects into subtypes. Any relations which already exist will be lost in the process if they do not match the new model.

Converting a relation: the source and target of the relation remain the same; only the relation type is changed.

Converting an attribute: the source object remains the same, but the attribute is assigned to another attribute type:

When converting individual relations, it is usually quicker to delete them and create new ones of the desired type. However, meta properties that we do not want to lose may be attached to the properties. Moreover, the converting operations are also available for all properties of a type, or for a selection of them, at once. A prerequisite is, of course, that the new relation or attribute type is also defined for the source and target objects.

If changes are made to the model, bear in mind that a previous state can only be restored by loading a backup. As with relational databases, there is no "undo" function.

Until now we have mainly been dealing with linking specific objects in the graph editor. Presenting such specific examples, discussing them with others and, where necessary, editing them is indeed the main function of the graph editor. We can, however, also display the model of the semantic network directly in the graph editor, e.g. the type hierarchy of a network.

Types of objects will then be displayed as nodes with a coloured background and types of relations as a dotted line:

Relation types in the graph editor

Where we have referred to relations in the graph editor so far, these were concrete relations between specific objects of the knowledge network. In addition, the general relation types (i.e. the schema of the relations) can also be displayed in the graph editor. A relation type is depicted in the graph editor as two semi-circles which represent the two directions (main direction and inverse direction). Between these two nodes there is the relation "inverse relation type":

A relation type and its hierarchy can be displayed in the graph editor, analogous to the object editor, with all supertypes and subtypes:


Attribute types may also be depicted in the graph editor – they are shown as triangular nodes.


Analogous to the object type hierarchy, the hierarchy of relation and attribute types can be changed in the graph editor by deleting and dragging the supertype relation.

As a further means of modelling, i-views offers the possibility of extending objects.

For example, a person may perform the role of a guitarist in one band but play a different instrument in another band. In addition, the person may act as a composer.

The fact that one person can play different roles in a knowledge network can be modelled via a special form of object type. It cannot contain objects of its own but extends objects of another object type (in this case "person"). For this purpose, an object type "role" is added to the knowledge network, for example, and the different roles for persons are created as its subtypes: guitarist, composer, singer, bassist, etc. For these "role object types" to be able to extend objects, this function is enabled in the editor for the object type by checking the box "type can extend objects":

Extensions are displayed in the graph editor as a blue dotted line:

With this extension we have achieved several things simultaneously:

  • We have formed sub-objects for the persons (we can also think of these as aspects or – with persons – as roles). These sub-objects can be viewed and queried individually. They are not independent: when the person is deleted, the extension "guitarist", along with its relations to bands or titles, is gone too.
  • We have expressed an n-ary statement. With separate relations between persons, instruments and titles/bands we could not express this – the assignment between them would be lost.

For this purpose, the relation "plays in band" has to be defined for the extension "guitarist". The effect that persons obtain an additional model via the extension can be helpful irrespective of n-ary statements.

From a technical point of view, the extension is an independent object which is linked to the core object by means of the system relation "has extension" or, inversely, "extends object". Its type (system relation "has type") forms the extension type.

When defining a new extension, two object types play a role: in our example we want to give persons an extension, and we have to provide this information on the type "person". The extension itself again has an object type – in our case "guitarist" (usually there are several such types). The specific extension objects are dependent on the type "guitarist" (and on all other types with which we want to extend persons).

When querying extensions in the structured query, we have to traverse the individual relations: from the specific person via the relation "has extension" to the extension object "guitarist", and from there via the relation "plays in band" to the band.
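Spelled out as a path, the query traverses:

    [person] --has extension--> [guitarist (extension)] --plays in band--> [band]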

Mix-in

The essence of this example with the role "guitarist" is that the relation "plays in band" is linked to the extension, not to the person. Hence, a consistent assignment is possible even with several instruments and several bands.

If the mix-in option is selected, the relation is instead created on the core object (the person) itself. The reason for this is that extensions are sometimes used not to express more complex statements but to assign an object polyhierarchically to several types. In this manner, the object inherits relations and attributes from several types.

When we set up an extensive type hierarchy of events, for example, with subdivisions into large and small events, outdoor and indoor events, sports and cultural events, we can either create types for all combinations (large outdoor concert, small indoor football tournament, etc.) or create the different event categories as possible extensions of objects of the type "event". Then we can assign an event, via its extensions, as a football tournament and at the same time as an outdoor event and a large event. Via the extension "football tournament" the relation "participating team" can then be inherited; via the extension "outdoor event", for example, the attribute "floodlight available". If we have set these properties to mix-in, they can be queried like direct properties of the events.
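Schematically, for the event example (names purely illustrative):

    event "summer open-air cup"  (object of type "event")
      extension: football tournament   --> contributes the relation "participating team"
      extension: outdoor event         --> contributes the attribute "floodlight available"
      extension: large event

With mix-in, "participating team" and "floodlight available" are created on the event object itself and can be queried like direct properties.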

When a mix-in extension is deleted, it behaves like a "normal" extension: at least one extension that entails the mix-in property has to remain. When the last of these extensions is deleted, the relation or attribute on the core object is deleted as well.

A special form of relation is the shortcut relation. Behind it lies the possibility of shortening a chain of several relations which are already defined and connected in a row in the semantic graph database into a single suitable relation. In this manner the system can, to a certain extent, draw a direct conclusion from an object A to an object B which is connected to A via several intermediate nodes.

For example, a band publishes a recording medium in a certain style of music; ergo, this style of music can likewise be assigned to the band directly:


In the form editor the inferred relation path is defined via the relations "is author of" and "has style".
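Spelled out, the inferred path and its shortcut (notation for illustration only):

    [band] --is author of--> [recording medium] --has style--> [style of music]
    => shortcut relation: [band] --has style--> [style of music]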


In queries, the shortcut relation can be used like any other relation.

In the current version of i-views, it is recommended to query chains of several nodes and edges via query modules instead, because of the better overview in structured queries.

In Chapter 2.1, properties of lesser complexity were defined on object types for their objects. Now suppose, for example, that users can add or edit content in our example music knowledge network via a web application, and that it should be recorded which information was changed by whom and when. To do this, we need attributes and relations that in turn attach to attributes and relations – in all combinations.

Attributes on attributes: for example, discussions and reviews are stored in the music knowledge network as text attributes of music albums. If it is to be recorded when discussions and reviews were added or last changed, we can define a date attribute that is attached to the discussion and review attributes:

Attributes on relations: this date attribute can equally be located on a relation between albums and personal sentiments such as "moods", if users are given the possibility of tagging:

Relations can likewise be attached to attributes and to relations. For example, we may want to document which users created or changed an attribute (e.g. the review of an album) or a relation between an album and a mood at certain times:

These examples with editing information form a clearly demarcated meta level. Properties of properties can, however, also be used for complex "primary information":

If, for example, the assignment of bands or titles to genres is to be weighted, a rating can be attached to the relation as a "weight" attribute.

An attribute on a relation may also be the amount of a transfer or the duration of a participation or membership.
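Schematically, a weighted genre assignment with a meta attribute on the relation (relation name and value purely illustrative):

    [The Beatles] --is assigned to genre--> [rock]
                        weight = 0.8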

Relations on relations can also express n-ary statements. For example, a band performs at a festival (that is a relation) and brings along a guest musician. He does not always play with the band and hence has no direct relation to it. Likewise, he cannot be assigned to the festival in general; he is assigned to the performance relation.

Modelling meta properties can, of course, also be realised by introducing additional objects. In the last example, the fact that the band performed at a festival could be modelled as an object of the type "performance". A significant difference is that with the meta model the primary information can easily be separated from the meta level: the graph editor does not show the meta information until it is requested, and in queries, as well as in the definition of views, the meta information can simply be left out. The second difference lies in the delete behaviour: objects are independently viable; properties, including meta properties, are not. When primary objects and their properties are deleted, the meta properties are deleted with them.

Incidentally, properties can be defined not only for specific objects but also for the types themselves. A typical example is an extensive written definition on an object type, e.g. "What do we understand by a company?" That is why, when creating new properties, we are always asked whether we want to create them for the specific objects or for the subtypes.

The attributes "character string", "data file attribute" and" selection" may be created multilingually. In the case of the character string attribute and data files, several character strings may then be entered for an attribute:

With data file attributes, several files (e.g. images with labels in other languages) can be uploaded analogously. In the case of selection attributes, all selection options are stored in the attribute definition; here it does not matter in which language the selection for the specific object is made.

All other attributes are depicted in the same manner in all languages, e.g. Boolean attributes, integers or URLs.

Attributes whose display differs between languages adapt it automatically, depending on the language: for example, dates written in the European order day|month|year are shown in the US format month|day|year.

In i-views, values in other languages are not simply stored as separate attributes; instead, an attribute carries a separate layer with its language variants. When developing an application, you do not have to bother about managing the different languages, but only specify the desired language for the respective query:

In i-views, preferred alternative languages can be defined: if there is no attribute value in the queried language, e.g. a descriptive text, the missing text can be shown in another language in which it is available. The order of the alternative languages can also be defined.
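A purely schematic example of such a fallback chain:

    queried language:                 en
    alternative languages (in order): de, fr
    attribute "description":          no value in en => the de value is displayed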

Multilingual settings are, for example, used in search.

Indexing forms part of the internal data management of databases. Used correctly, the setting of indexes can improve performance significantly.

Background: In i-views, all semantic elements (types or objects) are generally stored in a cluster together with their properties (attributes or relation halves). For certain transactions or uses, however, it can be better to load only part of the information. Instead of having to load entire elements or clusters to read a few properties for a query, a corresponding index refers exclusively to the required properties. Metaphorically, these indexes are both signposts and shortcuts to the required partial information.

The need for indexing in structured queries or during import mapping becomes apparent through various notices: in import mapping, if an object is identified not via the primary name, as expected, but through a different attribute, the note “No usable index for [...]” appears.


Import mapping with message regarding missing index

 


Structured query with message regarding missing index

 

Indexing can improve performance in particular when it comes to writing data (= importing).

Indexing is required for:

  • Transactions:
        Read transactions: search/structured query; view configuration
        Write transactions: imports (import mapping)
  • Rights checking

Depending on the intended use, suitable indexes must be selected for certain attributes or relations.

The indexes are defined in the Knowledge Builder settings. Indexes can be assigned either in the settings of the KB or in the detail editor of a type (Details > Indexing > Assign index).

Available indexes (Settings > Index configuration)

All indexes created in the Knowledge Builder can be managed centrally in the settings.


Category “Indexes”

This setting option can be used to manage the index structures. All available index types are listed under “Available indexes”. Each index type can be used for specific types of attributes or relations.

If an index is shown in grey, then the index is currently deactivated; if it is highlighted in red, then the index is currently not synchronous.

There are buttons to generate, delete, configure, assign and synchronize on the right-hand side.

Index | Use
----- | ---
Lucene full text index (JNI) | Full text query
Metrics | Performance improvement in structured queries
System | System relations (predefined, cannot be changed); used for the “extends object” / “has extension” / “is supertype of” / “is subtype of” relations
topic -> value | To list attribute values/relation targets in object lists
topic -> value (domain segmented) | To list attribute values/relation targets in object lists
value -> property | For single-sided relations; results in a speed-up for weighted inverse single-sided relations
value -> topic | Attribute values for an object
value -> topic (unique) | Attribute values that may only occur once per attribute type for an object (write rights check for imports)
value -> topic (word) [string splitting] | CDP-specific: only used in i-views content
value -> topic for subject keys (word) [string splitting] | CDP-specific: only used in i-views content
Full text index for terms [string splitting] | CDP-specific: only used in i-views content

 

Categories “Index for relations” / “Index for attribute values”

Indexes can be classified by different aspects. First of all, a distinction can be made between forward and reverse indexes. In the case of reverse indexes, it may make sense to refer from the target/value to the property in order to resolve meta conditions on the property. Finally, an index can optionally segment by the type of the source object in order to resolve structured queries that are limited to objects of subordinate types more efficiently.

Some properties may not require an index, depending on the specific application. (They can then be marked with “Ignore” and are not examined further in this optimization step.)

  • Relations can use a reverse index on the inverse relation instead of a forward index – and vice versa.
  • Attributes can also be indexed with modified/standardized values (e.g. full text with basic word forms). A corresponding operator can then be used to search for these.

 

Applicable indexes (detailed configuration)

The indexes that can be used for a relation type or attribute type can be assigned using the detailed configuration.


Assigning indexes in the detailed configuration of a type

Attribute types | Relation types
--------------- | --------------
topic -> value | topic -> value
topic -> value (domain segmented) | topic -> value (domain segmented)
value -> property | value -> property
value -> topic | value -> topic
value -> topic (unique) | –

 

In the settings of Knowledge Builder, a new index can be created under:
Settings > Index configuration > Indexes > Create new

The following selection is available at the start:

Index | Use
----- | ---
Lucene full text index (JNI) | Full text query
Redundant storage for relation attributes | To display meta properties of symmetric relations more quickly; used without additional filters
Pluggable indexer | Combined use of distributor and index modules for customized indexing; specific configuration by means of index filters is possible

The following section describes the configuration of pluggable indexers, because these can be used most flexibly and cover almost all use cases.

Addable index modules

Pluggable indexers enable the administrator to assemble an indexer from prefabricated modules in order to achieve the desired indexer behavior.

A pluggable indexer consists of distribution levels that are closed off by an index level regulating data storage. Hence, an indexer can index both attributes and relations.

If the indexer is assigned an optional index filter, the indexer behavior can be influenced further; only suitable property types can then be assigned to the indexer.

Since properties comprise attributes and relations, the following section refers to an attribute value or relation target as the value of the property.

 

Pluggable indexer

T = Topic = object/element/instance
P = Property = attribute/relation
V = Value = attribute value/relation target

Distributor/index | Use
----------------- | ---
Distributor by domain (after it, all other distributors can be selected) | To search for a subset of the object types that jointly use a property
Distributor for each property type (an index can be selected afterwards) | Distinction between attribute and relation
Index property on value/target (attribute -> attribute value; relation -> target object/target type) | To find relation targets in structured queries with a restriction on the meta property
❶ Index object on value/target (= topic -> value; = topic -> value (domain segmented)) (object -> attribute value; object -> target object of relation) | To list attribute values/relation targets in object lists
❷ Index value/target on property (= value -> property) (attribute value -> attribute; meta-relation target -> attribute; relation target -> relation; meta-attribute (value) -> relation) | For single-sided relations; results in a speed-up for weighted inverse single-sided relations
Index value/target on property (uniqueness check) (attribute value -> attribute) | To search for meta properties
❸ Index value to semantic element (= value -> topic) (attribute value -> attribute; relation target -> relation) | To support structured queries on objects with specified values/targets on attributes/relations
❸ Index value to semantic element (uniqueness check) (= value -> topic (unique)) (attribute value -> object, e.g. an email address) | Attribute values that may only occur once per attribute type for an object
Distributor for each property value | Together with “Index property”: for compact storage of many identical values/targets; same behavior as “Index value/target on property”
Distributor for each object | For single-sided inverse relations
Index redundant storage for relation properties (must not be used in combination with pluggable indexers) | Faster display of meta properties on relations when using symmetric relational properties
 

Filter

Filter type | Use
----------- | ---
Latitude | For indexing an attribute type of the value type “geographical position”
Longitude | For indexing an attribute type of the value type “geographical position”
Interval start value | For indexing an attribute type of the value type “interval”
Interval stop value | For indexing an attribute type of the value type “interval”
String filtering | –
Strings to words filter | –

 

A distinction is made between distributor modules and indexing modules. A distributor module partitions the index according to different aspects. It is followed either by another distributor or by an indexing module that stores the index entries.

 

The figure shows an example of how a pluggable indexer consisting of three modules (without a value filter) groups the index entries. This index can now efficiently answer questions such as:

  • Which animals start with S?
  • Which plants eat other organisms?
  • Which animals eat zebras (T03)?
  • etc.

Questions such as

  • Which organisms start with S
  • Which organisms eat flies (T05)

could also be answered. To do so, an indexer configuration without “Distributor by domain” would suffice (and might be more efficient depending on the data situation).

The most important module, without which most indexing modules cannot be added. It generally appears in first place and partitions the entries according to their property type.

Enables partitioning according to the types of the property-carrying objects. The module is only useful for properties of individuals.

If a property can occur in multiple object types and a search only searches for a subset of these object types, this module accelerates the search through corresponding index access.

This module can be used for indexing to summarize the relation targets on the source object. Like the previous module, it is used for mapping older indexers and, as of K-Infinity 3.1, it only makes sense for single-sided inverse relations.

This index module is used to store an attribute value on an object or a relation target on the source of a relation in the index. This type of indexing makes sense if expert queries for objects with specified values on indexed attributes (e.g. with specified target on indexed relations) are supposed to be supported.

The index module indexes in exactly the opposite way to “Index value to semantic element” and, for attributes, can be used to determine the column values of the indexed attributes for object lists. For relations, it can be used in the same way as “Index value to semantic element” if either the inverse relation is indexed or the source object is already more restricted by the search than the target object.

If you want to support expert queries with the indexed relation in both directions (source-target and target-source), the relation can be indexed either with this index and “Index value to semantic element”, or the relation and its inverse relation can both be indexed with one of the two index types. It can make a difference here whether the index module is combined with a “Distributor by domain”, because an index on the inverse relation can then be partitioned by means of the target domain.

This index module is used to store values on the attribute or target on a relation in the index. This type of indexing makes sense if searches for additional meta properties are supposed to be supported for the indexed attributes. To ensure this index can also be used in a search for the objects of the property (analogous to “Index value to semantic element”), the respective property must remain set to “Active” under “Property can be iterated” in the corresponding term editor.

This index module supports expert queries that search for targets of relations. To do so, the meta properties of the relation are used as strongly restricting criteria. Simple source-target conditions are, however, not supported.

Together with the “Distributor for each property value”, the same behavior can be achieved as with “Index value/target to property”. If there are a great many identical values or targets, this makes more compact storage possible; otherwise this combination has no advantages.

This index only stores the attribute values or relation targets. Using it makes sense if a “Distributor for each object” is used upstream and few objects have many values/targets.

This module can only be used by itself and is used to display the meta properties on relations more quickly if symmetric relational properties are used. No index structure is created at the technical level but the indexer can be addressed via the same configuration and programming interfaces.

The Index value to semantic element and Index value to property modules can be supplemented with a uniqueness check. The modules supplemented in this way are usually used for the consistency check of unique identifiers. They are available in the selection list for the addable index modules (e.g. Index value to semantic element (uniqueness check)).

If a new value is to be written and the same value is found in the index, this new value cannot be adopted. Values are recognized as identical if they are also grouped identically by all distributors of the index. If, for example, you want to perform a uniqueness check by domain only (this, for example, makes it possible for “modern” to coexist as an individual of verb and as an individual of adjective), the index must contain a Distributor by domain.

If a value filter is also configured, the uniqueness check is executed on the filtered values. This makes it possible, for example, to identify “arm” and “Arm” as identical.
Note: a value filter that splits strings (for full text) can be combined with the uniqueness check, but this is not usually sensible, because even a partial string can lead to duplicates after splitting, for example “The house” and “house and home.”
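As a minimal sketch (plain JavaScript, illustrative only, not the i-views API), the interplay of value filter and uniqueness check looks roughly like this:

// Illustrative sketch only: the uniqueness check compares the filtered
// values, so two raw values that the filter maps to the same string collide.
const filterValue = (s) => s.toLowerCase();
const seen = new Set();

function tryWrite(rawValue) {
  const key = filterValue(rawValue);
  if (seen.has(key)) return false; // rejected as a duplicate
  seen.add(key);
  return true;
}

tryWrite("arm"); // true - first occurrence
tryWrite("Arm"); // false - the filter maps it to the same value as "arm"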

If properties can occur multiple times, the Index value to semantic element cannot recognize duplicate values of such a property within one object. It is therefore possible for two identical attributes with identical values to exist in the same object, but not in different objects. If you want to prevent this, you must deactivate multiple occurrences in the attribute term, or instead use an Index value/target to property for the uniqueness check.

For geocoordinates and interval attributes, no atomic attribute value can be indexed. Instead, longitude and latitude, or interval start value and interval stop value, each index one component of the value. For complete indexing, a corresponding indexer for the other component of the value must be configured as well.

Full-text filters for strings can be configured in the admin tool. They determine which manipulations are applied to the strings and how the strings are split into individual words. In expert queries, additional operators labelled with the respective filter are then offered, allowing a specific query to be executed using this filter.

Strings can be indexed in manipulated form by means of “string filtering,” and when a query is executed, this results in all attribute values being interpreted as hits which the filter maps to the same string as the search input.

By means of “string splitting,” several (manipulated) sub-strings (tokens) from a text can be indexed. The related index then allows expert queries that execute a search within the string by means of the operators “Contains words” and “Contains phrase.”
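The difference between the two can be illustrated with a small sketch (illustrative JavaScript only, not the i-views API): string filtering indexes one manipulated form of the whole value, while string splitting indexes each token, which is what enables word-level operators such as “Contains words”.

// Illustrative sketch only.
const filter = (s) => s.toLowerCase();
const tokenize = (s) => filter(s).split(/\W+/).filter((t) => t.length > 0);

// String filtering: one index entry for the whole (manipulated) value.
filter("The White Stripes");   // -> "the white stripes"

// String splitting: one index entry per token; "Contains words" can now
// match the attribute value via any of its tokens.
tokenize("The White Stripes"); // -> ["the", "white", "stripes"]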

An attribute “Average number (calculated)” can be created on all property types. The value of the attribute specifies how many values of the corresponding property an object from the property domain has on average.

This information enables structured queries to better decide how they determine their result set. In addition, you can create an attribute “Average number (manual)” whose value overwrites the calculated one. (This makes sense if the domain is abstract but queries are supposed to use the property only where it actually occurs.)

Querying the semantic network involves various subtasks, for which we can configure different search modules: often we want to process a user's entry in a search box (character strings), and usually we want to follow the links within the semantic network for our queries.

  • Structured queries
  • Simple/direct queries (simple search, full text search, trigram search, regular expressions, parameterised hit quality)
  • Search pipeline

 

Using structured queries, you can search for objects which fulfil certain conditions. A simple example of a structured query: filter all persons who play a certain instrument.


First, there is the type condition: objects of the type person are searched for. The second condition: the persons have to play an instrument. The third condition: this instrument has to be the violin.

In the structured query, the relation "plays an instrument", the type of the relation target and the value of the target "violin" form three different lines and thus three conditions. The third condition, that the instrument has to be a violin, may optionally be omitted; the hit list would then contain all persons who play any instrument at all.

Often a condition (in this case the instrument) should not be fixed in advance but left completely open. Depending on the situation, the instrument can then be passed in as a parameter by the application:

The conditions may be arbitrarily complex, and the network can be traversed to any depth:

A slightly more complex example: persons or bands who deal with a certain issue in their songs (more precisely, in at least one song). In this case you do not search with the name but with the ID of the issue as the parameter – typical, for example, for searches which are queried via a REST service from the application [Figure – "ID" instead of "name"]

The type hierarchies are automatically included in the structured queries: the type condition "work" in the search box above includes its subtypes albums and songs. The relation hierarchy is included as well: if there are differentiations below "is author of" (e.g. "writes text" or "writes music"), the two sub-relations are included in the search. The same applies to the attribute type hierarchy.

 

Interaction

If a new structured query is created, the topmost of all types is entered by default. In order to limit the query further, you can simply overwrite the name or select "Choose type" by clicking on the icon.

The button allows you to add further conditions to the structured query. Conditions are deleted at the beginning of the respective line, where the kind of condition (relation, attribute, target, etc.) is indicated. When you click on the button, the following menu appears; it may vary slightly depending on the context.

From all possible conditions, focus has, until now, been on the very first item in the menu. A complete explanation of all conditions and options of the structured queries can be found in the next chapters.

One of the main purposes of structured queries is to provide information on a certain context in applications. The structured query from the last section, for example, can enable end users in a music portal to generate a list of all artists or bands who cover subjects such as love, drugs, violence etc. in their songs.

To do so, the structured query is usually integrated into a REST service via the query’s registration key. We include the subject in which the user is interested as a parameter in the query with the user’s ID.

Example scenario: A user enters a search string to search for their topic. Hence, there is no ID but only a string that is to be used to identify the topic. However, the query result is supposed to show immediately which bands have written songs on the subject. For this purpose, a structured query can be integrated into a search pipeline as a component - after the query that processes the search string.

One of the reasons why structured queries are such a central tool for i-views is that the conditions for rights and triggers are defined with structured queries. Let’s assume the only people allowed to leave comments in a music portal are artists and bands. In the rights system, you can thus specify that only artists and bands that have written at least one song on a topic may leave comments on this topic. Structured queries can also be used in exports to determine which objects are to be exported.

All these uses have one thing in common: we are only interested in qualitative, not weighted statements. This is the domain of structured queries in contrast to search pipelines.

Last but not least, structured queries are also important tools for us as knowledge engineers. We can use them to get an overview of the network and compile reports and to-do lists. Here are some examples of questions that can be answered using structured queries:

  • For which topics are there how many artists/bands?
  • Do specific topics have to be removed because too many relations have amassed or conversely should rarely used topics be merged or closed?

For ease of use, it makes sense to be able to organize structured queries in folders.


 

Implement

Structured queries are executed on the results tab using the search button.

The search results can then be further processed (e.g. copied into a new folder) but they are not kept there permanently.

The path which the structured query has taken can be traced in the graph editor: one or more hits are selected and displayed using the graph button.

A structured query can be copied, for example in order to create different versions. It can also be saved in XML format, independently of the network, so that it can be imported into another network. However, this is limited to versions of the same network, e.g. backup copies, because the structured query references the types of objects, relations and attributes via their internal IDs.

Very indirect conditions can be expressed in structured queries: you can traverse arbitrarily between the elements throughout the structure of the knowledge network. Artists and bands can thus be found who wrote songs on certain topics, even though we cannot name those songs specifically by their titles.

Condition chains may be arbitrarily deep, and several parallel conditions may also be expressed: additional conditions are simply added to any condition element as a further branch:

Several conditions: English bands with songs on a certain subject

In the example above, only artists or bands are found who created songs on a defined subject and who come from England. If, instead, we want to find all artists and bands which fulfil one of the two conditions, the conditions are expressed as an 'alternative'. By clicking the symbol of the condition in the form of the relation "is the author of", you can select an alternative from the menu:

Alternative conditions – the band either has to be English or have songs on a certain subject

If there are further conditions outside the alternative bracket there are objects in the hit list which fulfil one of the alternatives and all other conditions.

Let's assume the bands in the network are assigned either to cities or to countries, and of the cities, in turn, it is known in which countries they lie. To capture this in the search, the condition chain can simply be extended: we can, for example, search for bands which are assigned to a city which, in turn, is located in England. However, bands which are directly assigned to England would not be found this way. To avoid this, we can mark the relation "is located in" as optional, so that it does not have to be present.

 

At the same time, we can also include hierarchies that are several levels deep using the "Repetitions" function. For example, it is known of the band ZZ Top that they come from the city of Houston, which is in Texas. In order to also obtain this band as a result when bands from the USA are queried, we can specify for the relation "is located in" that it is to be followed repeatedly, until repetitions are reached:

 

Conditions can also be deliberately negated, for example if we search for punk bands which do not come from England. To this end, the negative condition is set up as a so-called "utility query".


The utility query delivers bands from England. From the main query, a reference to it can be established which states that the search results must not fulfil the criteria of the utility query. In this manner we subtract the results of the utility query from those of the main query and only obtain bands which do not come from England.

Interaction takes place as follows: the utility query is built up starting from its type condition and can then, from the main query above, be linked via the menu item "reference". At this stage you can select which kind of reference it should be (in this case negative).

The reference allows references to be made to other conditions of the same query within a structured query:

Here the last condition references the first one, i.e. the band that writes the cover version also has to be the author of the original. Without the reference, the search would read as follows: bands which have written songs which cover other songs which were written by any (random) band. Incidentally, one of the results is the band "Radiohead" (they covered their own song "Like Spinning Plates").

Search macros: other structured queries, but also other searches, can be integrated into structured queries as macros. This makes it possible to outsource recurring partial queries into separate macros and thus to adapt the behaviour at a central location when the model changes. A macro can be integrated into each condition line.

An example from our music network: from bands to all their works, no matter whether they are albums, songs directly assigned to the band, or songs on the albums of the band. We need this partial query more frequently, for example in a structured query which returns the bands for a certain mood. We start this query with a type condition – we are looking for bands – and integrate the pre-defined module as a condition for these bands:

The objects returned by queries that are integrated into a structured query as macros must, of course, match the type of the condition to which they are linked. With the aid of the identifier function, the query can be continued with additional conditions from the 'invoking' query. In our case, the albums and songs delivered by the macro query are further restricted by the invoking query: namely to albums and songs with the mood "aggressive". The search macro is integrated into the structured query via the menu "Query structure". Under structured query macro (registered) there is a selection list with all the registered macros.

Simple search: using the search mode "simple search", the results of a simple search or a search pipeline can serve as input for a structured query. The respective simple search can be selected by means of the selection symbol. The input box contains the search entry for the simple search. Further conditions can then, for example, filter the results of the simple search further.

Cardinality condition: a search for attributes or relations without conditions of their own can be carried out with a cardinality operator (characterised by a hash sign #). The cardinalities greater than or equal to, less than or equal to, and equal are available. The normal equal operator of a relation or attribute condition corresponds to greater than or equal to 1.

We have thus covered everything we can find within the menu "Query structure":

The type condition

The beginning of the structured query determines which objects should appear as results. To define it, you click on the type icon of the first condition and select "Choose type" in the menu; an input mask then opens in which the name of the object can be entered.

Alternatively, you can simply overwrite the text behind the type icon with the name of the object.

In the second step, the relation condition is added. For example, if a search is made for the place of origin of a band, "has location" is set as the relation condition. The target type of the relation is added automatically; it can, however, also be changed (if, for example, the "has location" relation applies to countries, cities and regions, but we only want the cities).

There are further functions available for a type condition. The general condition menu, which we reach via the button, contains the item "Schema": there, several type conditions can be defined consecutively, which is interpreted as "or" in the query. For example, we can search for works or events on a particular style of music as follows:

By checking the boxes "Subtypes" and "Instances" in the menu "Schema", we can also search just for types of objects instead of specific objects, or for both at the same time.

This is what the condition looks like when a search is made for both specific works as well as subtypes of the work (albums and songs).

Without inheritance: normally, inheritance applies automatically to all type conditions of the structured query. If a search is made for events in which bands play a certain style of music, all subtypes of events are incorporated into the search, and we are provided with indoor concerts, club concerts, festivals, etc. In the vast majority of cases this is exactly what is desired. For the exceptions, there is the possibility of switching off inheritance and restricting the search to direct objects of the type event, i.e. excluding the objects of its subtypes.

 

Operators for the comparison of attribute values

Attributes may also play a role as conditions in structured queries, for example when it does not suffice to identify only those objects which have an exact predefined value or the value entered as a parameter. For instance: bands which were founded after 2005, songs which are roughly 3 minutes long, or songs which contain the word "planet" in their title. Such cases require comparison operators. Which comparison operators i-views offers depends on the technical data type of the attribute:

Comparison operators for dates and quantities

The comparison operator Exactly equal constitutes a special case: the index filter is switched off, and it becomes possible to search for the special character *, which is normally used as a wildcard.

The comparison operator Between requires spelling of the parameter value with a hyphen, e.g. "10.1.2005 - 20.1.2005".

The comparison operator Distance requires spelling of the parameter value with a tilde, e.g. "15.1.2005 ~ 5" – i.e. on 15.1.2005 plus/minus 5 days.

Comparison operators for character strings

Comparison value from a script: attribute value conditions can be detached from partial searches and replaced by a script plus attribute condition. The result of the script is then used as the comparison value for the attribute value condition, e.g. when the available comparison operators do not suffice for a specific query.

 

Identifying objects

The structured query provides several options for identifying objects within the knowledge network. To keep things simple, the previous examples often specified the objects directly. In practice, this kind of manual specification can help when testing structured queries or when defining a (replaceable) default for a parameter entry.

We have already become familiar with identification via the name attribute, which can, of course, be any attribute. The menu item "Identify" offers some more options for defining starting points for the structured query:

Access right parameter: the results of the query can be made dependent on the application context. This is particularly relevant in connection with the configuration of rights and triggers, where, generally speaking, only "user" is usable.

Script: the objects to be entered at this point are defined by the results of the script.

Semantic element with ID: you may also determine an object via its internal ID. This condition is normally only used in connection with parameters and the use of the REST interface.

In folder: using the search mode "in folder", the contents of a collection of semantic objects can be fed into a structured query as input. The selection symbol enables you to select a folder within the work folder hierarchy. The objects of the collection are filtered with respect to all other conditions (including conditions for terms).

Adding comments

Every condition in a structured query can be commented. To add a comment, choose the option "add comment" in the context menu. An existing comment is indicated by a blue flag at the condition in the structured query, which shows the comment text on mouseover.

By means of the dialog "Edit comment", the corresponding comment can be changed or removed:


 

The indicator flag for comments is not shown when the condition has a warning or a fault; in this case you can only see the yellow warning indicator or the red fault indicator. Additionally, all warnings, faults and comments are listed in order on the right side below the parameters editor.

Warnings and cautions can be suppressed in the indicator display if you want to ignore them at this point (which, of course, is not recommended). To do so, click on the indicator symbol in the list view or choose the function "Suppress warnings" in the context menu of the condition. The display can be reactivated in the same way, or by choosing the context function "Show all warnings" of the root finder.

Processing the search queries of users can be carried out with or without interaction (e.g. with type-ahead suggestions). The starting point is, in any case, the character string entered. When configuring the simple search, we can define in which objects and attributes we search for the user input, and how far we may deviate from the character string entered. Here is an example:

How do we have to design and configure the search in order to obtain the feedback below for the entry "white"? In all cases we must have configured the query so that we only want persons and bands as results. But what about deviations from the user input?

  • When is the (completely unknown) Chinese experimental band called "WHITE" a hit? If we state that upper case and lower case do not matter.
  • When will we receive "Whitesnake" as a hit? If we treat the entry as a substring and attach a wildcard.
  • When "Barry Eugene Carter"? If we search not only the object names but other attributes as well – his stage name is, after all, "Barry White".

These options can be found again in the search configuration as follows:

Configuration of the simple search with (1) details as to which types of objects are to be browsed through, (2) in which attributes the search has to be made, (3) upper case and lower case and (4) placeholders.

Placeholder/wildcard

The entry is often incomplete, or we want to retrieve the entry within longer attribute values. To do this, we can use placeholders in the simple search. The following placeholder settings can be found in the simple search:

 

  • Placeholder behind (prefix) finds [White Lies] for the entry "white"
  • Placeholder in front (suffix) finds [Jack White]
  • Placeholder behind and in front (substring) finds [The White Stripes]
  • Caution! Placeholder in front is slow.

 

The option "Always wildcards" works as if we had actually attached an asterisk in front and/or behind. Behind automatic wildcards there is an escalation strategy: in the case of automatic placeholders, a search is made first with the exact user entry. If this does not deliver any results a search will be made with a placeholders, depending on which placeholders have been set. With the option prefix or substring there is once again a chronological order: in this case you look for the prefix first (by attaching a wildcard) and, if you still can't find anything, you make a search for a substring (by means of a prefix and attaching a wildcard).

If placeholders may be attached in your search, you can state in the box minimal number of characters how many characters the search entry must contain before the placeholders are actually added. Entering 0 deactivates this condition. This is particularly important if we set up a type-ahead search.

With the weighting factor for wildcards, you can adapt the hit quality so that the use of placeholders results in a lower quality. In this manner, if we want to rank the hits, we can express the uncertainty introduced by the placeholders through a lower ranking.

If the option "No wildcards" is selected the search entry will not be changed. The individual placeholder settings will then not be available.

The user can, of course, him/herself use placeholders in the search entry and these can be included in the search.

Apply query syntax: when the box for the option "Apply query syntax" is checked, a simplified analysis of the search input is used in which, for example, the words "and", "or" and "not" no longer have a steering effect. In order to still be able to define how the hits for the individual tokens are combined, the default operator can be switched to "#and" or "#or". All linking operators have in common that they do not refer to the values of individual attributes but to the result objects (depending on whether "hits only for attributes" has been set). A hit for online AND system thus delivers semantic objects which have a matching attribute both for online and for system (not necessarily the same attribute).

Filtering: simple searches, full-text searches and some of the specialised searches can be filtered according to the types of objects. In the example described in the last paragraph, we made sure that the search results only include persons and bands. Attributes which do not match a possible filtering are shown in bold red print in the search configuration dialogue. In our case this could be an attribute "review", for example, which is only defined for albums.

Translated attributes: for translated attributes we can neither select a specific translation nor have the language defined dynamically. Multilingual attributes are searched either in the active language or in all languages, depending on whether the option "in all languages" is checked.

Query output: the maximum number of results can be defined by entering it in the "results" box. The associated checkbox activates or deactivates this limit; entering a number activates the checkbox automatically. Caution: if the number is exceeded, no output at all will be shown!

Server-based search: generally speaking, each search can also be carried out as a server-based search. The prerequisite for this is that an associated job client is running. This option is useful when it is foreseeable that very many users will make search queries: by outsourcing certain searches to external servers, the i-views server is relieved.

 

 

In our query examples so far, the users have only entered one search term. But what happens if a user enters "Abba Reunion News", for example, and thus wants to find a news article which is categorised by the keywords "Abba" and "reunion"? We have to split up this entry, because none of our objects – or at least not the article being searched for – would match the entire string:

 

 

 

Our examples so far do not fall short only because of multi-word search inputs. We also often have search situations in which it makes no sense to treat the names or other character strings from the network, against which we compare the input, as single blocks, e.g. because we want to retrieve the input within a longer text. At some point, wildcards are no longer an adequate means here: if we also want to split up the character strings on the side of the objects, i.e. the text attributes being searched, it is better to use the full-text search.

 

 

If we want to search through longer texts word by word, e.g. description attributes, we recommend the use of a full-text index. What does that look like?

The full-text index records all terms/words which occur within a portfolio of texts, so that i-views can quickly and easily look up in which texts (and at which position in the text) a particular word occurs.

"Texts", however, are not usually separate documents within i-views, but the character string attributes which have to be searched through. Their full-text indexing is a prerequisite for the fact that these attributes are offered in the search configuration.

Full-text indexing, too, is concerned with the deviations between the exact sequence of characters in the text and the form which is entered in the index and which can hence be retrieved. An example: a news item from the German music scene:

In this example we find a small part of the filter and word demarcation operations which are typically used for setting up a full-text index:

Word demarcation / tokenizing: punctuation marks such as exclamation marks are often placed directly after the last word of a sentence, without a space in between. In the full-text index, however, we want the entry {tour}, not {tour!} – hardly anyone will search for the latter. For this purpose, when setting up the full-text index, we have to be able to specify that certain characters do not belong to the word. The decision is not always easy: for a character string such as "Cuddle-Classic" occurring in a text, we have to decide whether we want to include it in the full-text index as a single entry or as {cuddle} and {classic}. In the first case, our news item will only be found by an exact search for "Cuddle-Classic" or, for example, "*uddle-c*"; in the second case, by all "classic" searches.

What we will probably keep together in spite of punctuation, i.e. exclude from tokenizing, are abbreviations: when AC/DC come to Germany o.i.t. (only in transit), it is probably better to have the abbreviations in the index rather than the individual letters.

Filter: using filter operations, we can both modify words when they are included in the full-text index and suppress their inclusion entirely. A well-known example are stop words, for which we can maintain a list here. Moreover, we probably do not want single letters (as in "Bela B.") in the index on their own – the likelihood of confusion is too great. Other filters can reduce words to their basic forms or define replacement lists for individual characters (e.g. in order to eliminate accents). Others, in turn, clear the text of XML tags.
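A toy version of such a filter chain (illustrative JavaScript only; real configurations are defined in the admin tool, as described below) might look like this:

// Illustrative sketch of a full-text filter chain: some filters act
// before tokenizing (protecting abbreviations), others afterwards
// (stripping punctuation, lowercasing, removing stop words).
const ABBREVIATIONS = new Set(["AC/DC", "o.i.t."]); // kept as single tokens
const STOP_WORDS = new Set(["the", "a", "and"]);

function indexEntries(text) {
  const tokens = [];
  for (const raw of text.split(/\s+/)) {
    if (ABBREVIATIONS.has(raw)) { tokens.push(raw); continue; }
    const word = raw.replace(/[!?.,]+$/, "").toLowerCase(); // {tour}, not {tour!}
    if (word && !STOP_WORDS.has(word)) tokens.push(word);
  }
  return tokens;
}

indexEntries("AC/DC on tour!"); // -> ["AC/DC", "on", "tour"]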

We can set all this in the admin tool under "index configuration". These configurations can then be assigned (in the knowledge builder or in the admin tool) to the character string attributes. The index configuration is organised in such a manner that filtering can take place both before and after the word demarcation.

The full-text search does not use the wildcard automatism of the other queries, but users may, of course, add wildcards to their input.

Search pipelines enable individual components to be combined into complex queries. The individual components perform operations such as:

  • traversing the network and thus determining the weighting
  • performing structured queries and simple queries
  • compiling hit lists

Every step produces an output (usually a set of objects). This output can, in turn, serve as input for subsequent components in the pipeline.
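Conceptually (an illustrative JavaScript sketch, not the actual pipeline engine), a search pipeline threads a set of named variables through its components:

// Illustrative sketch: each component reads its input variable and
// writes its output variable; later components can pick up any of them.
function runPipeline(components, initialVariables) {
  const variables = { ...initialVariables };
  for (const component of components) {
    const input = variables[component.inputVariable];
    variables[component.outputVariable] = component.run(input, variables);
  }
  return variables;
}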


Example

Let us assume that songs and artists in our music network are characterised with tags named 'moods'. Starting from a certain mood, we now want to find the bands which best represent this mood.

Step 1 of our search pipeline goes from a starting mood (in this case "aggressive") via the relation is mood of to the songs which are assigned to the mood 'aggressive':

 

In the second step we go from the number of songs detected in the 'mood' searched for to the corresponding bands via the relation has author:

Now we would like to pursue a second path: from the starting point 'mood' "aggressive" to the musical directions which are characterised by aggressiveness.

Starting from this set of relevant musical directions, we go to the bands which are assigned to them. We go down this alternative path in one step using a structured query:

From the last two steps, we give the indicator "musical direction" a somewhat lower weighting and merge the outputs at the end:

The steps are processed in sequence: the input and output variables define which step continues to work with which hit list. In this manner we could, for instance, start again from the 'mood' on our alternative path.

 

The principle of weightings

The goal was to give the bands we obtain as output a ranking which shows how great their semantic "proximity" to the mood aggressive is. We influence the ranking of this search at two points in particular: at the very end, in the summary, we give a higher weighting to bands which are found both via their musical direction and via their songs. In this case this applies to Linkin Park and the Sex Pistols. The higher ranking of Linkin Park results from the fact that several different songs with the mood aggressive lead to Linkin Park: since more aggressive songs by Linkin Park are in the database, Linkin Park is 'rewarded' with a higher ranking.

The individual components of a search pipeline are shown in the components box of the main window, in the order in which they are executed.

Using the button add we can insert a new component at the end of the existing components.

Grouping with blocks serves only to provide an overview, e.g. for the compilation of several components in a functional area of the search pipeline.

The order of the steps can be changed using the upwards and downwards buttons or with drag & drop.

Using the remove button, the selected component is removed, including any sub components it may have. The configuration of the selected component is displayed on the right-hand side of the main window.

 

Configuration of a component

A selected component can be configured on the right-hand side of the main window using the "configuration" tab. Most components need input, which usually comes from a previous step. In our example, the first component passes its output on to the next component under the variable "songs"; that component goes from the songs to the bands and, in turn, passes its output on to the subsequent steps as "bandsThroughSongs":

Using the input and output variables we can also, in later steps, go back to an earlier output, as we saw in the last paragraph.

We define the input parameters as global settings of the search. Under the name assigned here, we can then access these inputs in every step of our search pipeline. In our example, the input parameter for identifying typical bands is the mood.

Some components enable a deviation from the standard processing sequence:

Individual processing: elements of a set, e.g. hits from a search, may be processed individually. This is practical if you want to assemble an individual environment of adjacent objects for each search hit. In individual processing, each element of the configured variable is stored as a single hit and processed by the sub components.

Condition for set parameters: this component only executes its sub components if the specified parameters have been set, regardless of their value. New sub components can be added using the 'add' tab.

KPath condition: using a KPath condition, we can determine that the sub components are only executed if a condition expressed in KPath is fulfilled. If the condition is not fulfilled, the input is passed through unchanged. KPath is described in the KScript manual.

Output: we can stop the search at any stage and return the input. This component is also useful for testing the search pipeline.

The block components, which we also used in our example, group a number of individual steps. To maintain an overview in extensive configurations, we can also change the name of a component using the "description" tab and add a comment. Neither block components nor descriptions have any functional effect; both merely serve the 'legibility' of the search pipeline.

 

Test environment

Using the test environment in the menu, we can analyse how the search functions. The upper section contains the search input, the lower section the output. The input may be a search text or an element from the knowledge network, depending on which required and optional input parameters we have defined globally in the search pipeline. If we wish to enter an element from the knowledge network as a starting point, we select the corresponding parameter line and add an attribute value or a (knowledge network) element, depending on the type.

On the tab Trace search, a report of the search is displayed. This primarily consists of the assignment of the output variables and the execution duration of each component. The log begins with the pre-configured variables (search string) as well as the active user.

 

Calculation possibilities

For some components it is possible to combine several quality values into one single quality value – e.g. in "summarise hits", but also when traversing relations (see the example above). The following calculation methods are available for this purpose:

  • addition / multiplication
  • arithmetic average / median
  • minimum / maximum
  • ranking

The option "ranking" is then always suitable when we want to assemble an overall picture from individual references, e.g. if we want to calculate many paths, at least partially independent paths – at the end still with differing lengths – to an "overall proximity". Using the ranking calculation we ensure that all positive references (all independent paths) keep increasing their similarity without exceeding 100%.

In the search pipeline quality values are always specified as floating point numbers. The value 1 thereby corresponds to a quality of 100%.
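One common formula with exactly this behaviour is the complementary product (an assumption for illustration; the manual does not spell out the exact calculation used by the ranking option):

// Illustrative sketch: every additional positive piece of evidence
// increases the total quality, but the result never exceeds 1.0 (100%).
function rankingQuality(qualities) {
  return 1 - qualities.reduce((rest, q) => rest * (1 - q), 1);
}

rankingQuality([0.5]);           // 0.5
rankingQuality([0.5, 0.5]);      // 0.75
rankingQuality([0.5, 0.5, 0.8]); // 0.95 - grows, but stays below 1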

Weighted relations and attributes

Starting with semantic objects, we can traverse the graph in this step and collect relation targets or attributes. To do so, we have to specify the type of relation or attribute.

Please note: Only collected targets are output, rather than the initial set. If this is to be displayed, we then have to summarize the input and output hits.

When traversing a relation, the weighting of hits can be influenced. Let's assume we want to semantically enhance the initial "mood" of our example search with "sub-moods", but this indirection is to be reflected in the ranking: connections to bands that run via sub-moods are not supposed to count as much as connections via the initial mood. For this purpose, we can assign a fixed value – e.g. 0.5 – for moving along the relation and then merge it with the input quality, e.g. by multiplication. In this case, the sub-moods added in this step count only half as much as direct moods.

Instead of assigning a fixed weight for moving along the relation, we could also read the value from a meta-property of the basic type float of the selected relation. If the attribute is not available or no attribute has been configured, the default value is used. The value should be between 0 and 1. The hit generation can be configured in detail: For relations, you have the option to also generate a new hit for the source of the relation (rather than for the relation target). 

If a relation has been selected as a property and hits are generated for relation targets, we can also transitively trace the relation. The quality value is reduced with each step until the value falls below the specified threshold. If an object has more relations than specified under maximum fan-out, these relations are not traced. The higher the damping factor, the more the quality value is reduced.
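As a sketch (illustrative JavaScript; the parameter names mirror the options described above, but the structure is ours, not the product's API):

// Illustrative sketch of transitive relation tracing with weighting and
// damping. The quality shrinks with every step, so the recursion stops
// once it falls below the threshold.
function traverse(start, getTargets, weight, damping, threshold, maxFanOut) {
  const hits = [];
  function step(element, quality) {
    const targets = getTargets(element);
    if (targets.length > maxFanOut) return; // too many relations: not traced
    for (const target of targets) {
      // one way to model "higher damping factor => stronger reduction"
      const q = quality * weight * (1 - damping);
      if (q < threshold) continue;
      hits.push({ element: target, quality: q });
      step(target, q); // follow the relation transitively
    }
  }
  step(start, 1.0);
  return hits;
}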

 

Structured query

We can use structured query components to either search for semantic objects/go from an existing set to other objects (as with the weighted relation) or filter a set.

If we search for objects, we forward our initial set of hits from a preceding step into the search via the parameter name. (In general: Within the expert query, variables of the search pipeline, e.g. search string, can be referenced via parameters.) In this case, the input stays blank.

 

For filtering, in contrast, we specify a set of objects as the input. The output contains all objects that meet the search condition. Objects that do not meet the search condition can optionally be stored in an additional variable (Rest).

We can either define the structured query ad hoc directly in the component or we can use an existing structured query.

Please note: If an existing search is selected, no copy is created. Any changes to the structured query that we make for search pipeline purposes also modify the query for all other uses.

 

Query

You can use the Query component to execute simple queries, full text queries and other search pipelines. Simple queries and full text queries receive a string here, e.g. the search string: This is a parameter that is available for processing user input in all search pipelines. The hit list of the called search fills the output of this component. 

By integrating search pipelines into other search pipelines, we can factor out sub-steps that occur frequently. Several parameters and entire sets of hits (“hit collections”) can be transferred to other search pipelines. With integrated search pipelines we can also map parameters; that is, we can access the output of every sub-step of the integrated search and vice versa. Via selected parameters we can also rename them, for example if we want to use a set of hits from the integrated search but have already used its name. Alternatively, we can apply only some of the parameters of the integrated search in order to avoid such conflicts.

 

Summarize hits

We can use this component to summarize different sets of hits (“hit collections”) from previous steps. The following methods are available for summarizing:

Join: All hits that occur in at least one of the sets are output as the result.

Intersect: Only hits that occur in all sets are output as the result.

With joins and intersects, a semantic object can occur in several sets of hits (“hit collections”) and has to be computed as one total hit with a new hit quality. The aforementioned calculation options are also available here.

Difference: One of the sets of hits (“hit collections”) must also be defined as the initial set. The other sets are deducted from this set.

Symmetric difference: The result set consists of objects that are included in exactly one subset (= everything except for the intersection, when there are two sets).

Three different types of total hits can be generated. The selection is particularly relevant if partial hits include additional information.

  • To generate uniform hits, remember the original hits as the cause: New hits are generated that contain the original hit as the cause.
  • Extend original hits: The original hit is copied and receives a new quality value. If there are several hits for the same semantic object, a random hit is selected.
  • Generate uniform hits: A new hit is generated. The properties of the original hit are lost.

 

Summarize partial hits

During individual processing, you frequently have to generate a total set from partial hits. The component “Summarize partial hits” enables you to do so: it summarizes all hits of one or more partial sets of hits (“hit collections”). The difference to Summarize hits is that summarizing only takes place once at the end, not for every partial hit set. This is relevant in particular when calculating the quality, because summarizing for every partial hit set would return incorrect values, in particular for the median.

 

Script

A search pipeline can contain a script (JavaScript or KScript). This can access the variables of the search pipeline. Furthermore, a script can transfer several parameters to the search pipeline. The result of the script is used as the result of the component.

JavaScript API and KScript are described in separate manuals.

 

Copy quality from attribute value

For hits, we can copy the quality value from an attribute of the semantic object. If the object does not have such an attribute, the default value is used. The value should be between 0 and 1.

 

Compute total quality from weighted qualities

To adapt the quality of a search hit, it can be useful to compute a total value from individual partial qualities. The qualities must be available as numeric values. These values are used to calculate a new total quality.

 

Compute overall quality of hits

You can use the individual quality values of a set of hits to compute a total quality.

 

Restrict quality

We can restrict sets of hits (“hit collections”) to hits whose quality value falls within specified limits (minimum or maximum). Normally, we want to filter out hits that fall below a certain quality threshold.

 

Restrict number of hits

If the total number of a set of hits is to be restricted, we can add the component restrict number of hits. The option Do not split hits of the same quality prevents a random selection among several hits of the same quality when enforcing the total number; in that case we get more hits than specified.

In some very specific cases, we can also randomly select the hits, e.g. if we have a large number of hits of the same quality and want to generate a preview.

 

Scale quality

The quality values of a set of hits can be scaled; a new set of hits with scaled quality values is calculated. The calculation takes place in two steps:

  1. The quality values of the hits are limited. The threshold values can either be specified or calculated; the calculation determines the minimum and maximum value of the hits. If thresholds are specified and a hit has a quality value outside of them, the value is limited to the threshold value. If you want to remove such hits instead, you have to execute the restrict quality component first. Example: mapping percentage values to school grades – 30% is average, over 90% is a top score; the values can be scaled linearly from 30% to 90%.
  2. Following that, the quality values are scaled linearly. Hits with the minimum/maximum input value receive the minimum/maximum scaled value.
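The two steps can be written out as follows (a sketch; the variable names are ours):

// Illustrative sketch of the two-step scaling: clamp the quality to the
// input thresholds, then map it linearly onto the target range.
function scaleQuality(q, minIn, maxIn, minOut, maxOut) {
  const clamped = Math.min(Math.max(q, minIn), maxIn);                       // step 1
  return minOut + ((clamped - minIn) / (maxIn - minIn)) * (maxOut - minOut); // step 2
}

// School-grade example from above: 30% is average, over 90% is a top score.
scaleQuality(0.3, 0.3, 0.9, 0.0, 1.0); // 0.0
scaleQuality(0.9, 0.3, 0.9, 0.0, 1.0); // 1.0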

 

Compute hit quality

You can use a KPath expression to generate a new hit with calculated quality for a hit. The KPath expression is calculated on the basis of the input.

The “Hit” content model is available so that search results can be processed and transported together with their quality and causes. A “Hit” can be seen as a container that wraps the element together with several properties and makes them temporarily available to the context. The contained properties can be, for example, the calculated hit quality, the hit cause, a change log entry, etc.

In search pipelines, the content models “Hit” and “Hits” are available. The “Hits” type is an array of several “Hit” elements:

 

Meta-attributes of hits

In addition to the semantic element, the following meta-attributes are transported in a hit:

  • Hit quality: Can have a value between 0 and 1 by setting a quality in a search pipeline; the hits of a structured query receive the value 1 by default
  • Hit cause: Refers to the input element that has led to the hit and its type
  • Hit cause (snippet): Refers to the content or the search term that has led to the hit

For detailed information on the meta-attributes, refer to the JavaScript API.


Using hits in search pipelines

If a hit list is to be processed in a search pipeline by means of a simple query, individual processing is required, because the hit list is in the form of an array: queries can process an individual “hit” in the form of a string, but not “hits” (= array). Converting a “hit” into a string can, in turn, be done using a script that precedes the simple query.

Example script for converting a hit into a string: 

function search(input, inputVariables, outputVariables) {
  // "input" is a single hit: element() unwraps the semantic element,
  // name() returns its name - the string passed on to the simple query.
  return input.element().name();
}


Using hits in tables

The “Use hits” option is available in the column element configuration of a table. This option determines whether the entire hit element (semantic element + meta-attributes) or only the semantic element is to be forwarded to display query results.


Processing hits in tables via a script

If the query results are to be processed further using a script, the “Use hits” option determines whether the query result is to be treated as a hit: the script is passed either a $k.SemanticElement or a $k.Hit as a JavaScript object.
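A hedged sketch of such a column script (the element() accessor is the one used in the conversion example above; the instanceof test is an assumption about how the two object kinds can be distinguished):

// Illustrative sketch: depending on the "Use hits" option, the script
// receives a $k.Hit or a $k.SemanticElement as input.
function render(input) {
  if (input instanceof $k.Hit) {   // assumption: Hit is exposed like this
    return input.element().name(); // unwrap the semantic element first
  }
  return input.name();             // plain semantic element
}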

With the exception of the structured queries, which are created in folders and also executed there, all searches can be made available for internal use in the header of the knowledge builder.

For this purpose, we only have to drag & drop a pre-configured search into the search box in the header of the knowledge builder. If several searches are available there, you can select the desired one from the pull-down menu by clicking on the magnifier icon. The search input box always shows the search mode last used.

We can remove the search using the global settings where we can also change the sequence of the various searches in the menu.

 

 

Alternatively, the full-text search can also be carried out via the external indexer Lucene. The search configuration is analogous to the standard full-text search, i.e. the attributes connected to the Lucene index can, in turn, be configured in the search; the search process is analogous as well. For configuring the Lucene indexer connection, please refer to the corresponding chapter in the admin manual.

 

 Regular expressions are a powerful means of searching through databases for complex search expressions, depending on the task concerned.

Search with regular expressions: hit

The [CF]all: the call, the fall
Car.: cars
Car.*: cars, caravans, Carmen, etc.
[^R]oom: doom, loom, etc. (but not room)

 

As search inputs, i-views supports the standard known from Perl, which is described, for example, in the Wikipedia article on regular expressions.

 

The search in folders is carried out in names of folders and their contents:

  • folders whose name matches the search input
  • folders which contain objects which match the search input
  • expert searches which contain elements which match the search input
  • scripts in which the search input appears
  • rights and trigger definitions which contain elements which match the search input

Using the search input #obsolete, you can specifically search for deleted objects (e.g. when searching in rights and triggers). When configuring the search, the number of folders to be searched can be limited. Furthermore, the option "search for object names in folders" can be deactivated. This is helpful if you do not want to search for semantic objects in folders, because for extensive folders (e.g. saved search results) the search for object names can take a very long time.

Along with the objects and their properties, we also build a variety of other elements in a typical project: we define, for example, queries and imports/exports, or write scripts for specific functions. Everything that we build and configure can be organized in folders.

The folders are shared with everyone else working on the project. If we do not wish to do so, we can file things in the private folder, for example for test purposes. This is only visible for the respective user.

A special form of the folder is the collection of semantic objects, in which we can file objects manually, for example for processing at a later date. To do so, we simply move them to the folders using drag & drop; there are also operations that, for example, file result lists in folders. The collection of semantic objects only keeps references to the objects: the moment we delete one of these objects, it also disappears from the collection. For collections of semantic objects with more than 100 entries, the table configuration that best suits the content is not determined automatically, for reasons of performance. We can, however, request this by means of the context menu function “Determine configuration of the object list” when necessary.

 

Registration

Queries, scripts, etc. can call each other (a query can be integrated into another query or into a script, while, in turn, a script can be called from a search pipeline). Registration keys exist for this purpose: with them we can equip queries, import/export mappings, scripts and even collections of semantic objects and organizing folders, so that they can provide other configurations with their functionality. The registration key must be unique. Everything that has a registration key is automatically added to the Registered objects folder, or to the subfolder that corresponds to its type.
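As a sketch of how a registered query can be used from a script (the registration key “similarSongs” and the parameter name “songName” are hypothetical; the calls $k.Registry.query(...), findElements(...) and attributeValue(...) are used in the same way as in the script mapping example later in this documentation):

function namesOfSimilarSongs(songName)
{
	// Look up the structured query registered under the key "similarSongs"
	var query = $k.Registry.query("similarSongs");
	// Execute the query with its parameter and collect the hits
	var hits = query.findElements({songName: songName});
	var names = [];
	for (var i = 0; i < hits.length; i++) {
		names.push(hits[i].attributeValue("objectName"));
	}
	return names;
}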

 

Shift, copy, delete

Let us assume we have a folder called Playlist functions in our project. This might contain an export, some scripts and a structured query “similar songs”, which we would like to use in a REST service. The moment we give the structured query a registration key, it is added to the folder Registered objects (“Technical” section). This means the structured query “similar songs” appears in the folder Registered objects under Query. It also remains there when we remove it from our project subfolder Playlist functions. If we remove the registration key, the query automatically disappears from the registry.

The basic principle when deleting or removing: Queries, imports, scripts can be in one or several folders at the same time, and at least one folder must contain them. Only when we, for example, remove our query from the last folder will it actually be deleted. Only then does i-views also request a confirmation of the delete action. The same applies for removal of the registration key.

If we wish to delete the query in one step, regardless of the number of folders that contain it, we can only do this from the registry.

 

Folder settings

We can define quantitative limits for query results, folders and object lists (lists of the specific objects in the main window of the Knowledge Builder when an object type is selected on the left-hand side) in the folder settings. Automatic query up to the number of objects specifies the number of objects up to which the contents of folders or object lists are shown without any further interaction by the user. If this limit is exceeded, the list initially remains empty and the message Query not executed appears in the status bar. Executing a search without an input in the input line still shows all objects – at least until the second limit is reached: Maximum number of query outputs and Maximum number of outputs in object lists are usually set to high values. When these values are exceeded, no result is returned at all; the query must then be restricted, e.g. by entering the beginning of a name in the input box of an object list.

By mapping data sources we can import data to i-views from structured sources and export objects and their properties in structured form. The sources can be Excel/CSV tables, databases or XML structures.

The functions for import and export overlap for the most part and are therefore all available in a single editor. In order to access the functions for import and export, it is first necessary to select a folder (e.g. the working folder). There, the “New mapping of a data source” button can be used to select a data source for the import or export.

Alternatively, you can find the button on the “TECHNICAL” tab under “Registered objects” -> “Mappings of data sources”.

The following interfaces and file formats are available for import and export:

  • CSV/Excel file
  • XML file
  • MySQL interface
  • ODBC interface
  • Oracle interface
  • PostgreSQL interface
  • For the exchange of user IDs, a standard LDAP interface has been implemented.

The following section uses a CSV file to describe how to create a table-oriented import/export. As all imports/exports apart from XML imports/exports are table-oriented and the individual data sources differ only in terms of their configuration, the example for the mapping of the CSV file can also be applied to the mapping of other databases and file formats.

CSV files are the default exchange format for spreadsheet applications such as Excel. CSV files consist of individual rows of plain text in which columns are separated by a fixed, predefined character such as a semicolon.

Let’s use a table with songs as a first example: When the table is imported, we would like to create a new, specific object of the type song for each line. The contents of columns B to G become attributes of the song, or relations to other objects:
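A minimal sketch of what such a CSV file could look like (the column names and values here are illustrative, not the exact example table):

Song title;Album;Length;Year of release;Artist;Genre;Mood
Eleanor Rigby;Revolver;2:07;1966;The Beatles;Pop;Dreamy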

Using the song as a basis, we build up the structure of attributes, relations and target objects that should be created by the import (left-hand side). An object of type song is created this way for row 18, for example, with the following attributes and relations:

 

We can, however, also decide to distribute the information from the table in a different way, for example allocate the year of release and artist to the album, and in turn the genre to the artist. A row still forms a context, however this does not mean it must belong to exactly one object:

 

Everywhere that we build up new, specific objects and relation targets in our example, we must always specify at least one attribute for this object, in this case the respective name attribute that allows us to identify the corresponding object.

Once we have selected the “New mapping of a data source” button, a dialog opens which we must use to specify the type of data source and the mapping name. If we have already registered the data source in the semantic graph database, then we will now find it in the selection menu at the bottom.

By pressing “OK” as confirmation, the editor for the import and export opens. We can specify the path of the file we wish to import under “Import file”. Alternatively, we can also select the file using the button to the right of it. As soon as the file has been selected, the column headings and their positions in the table are read out and shown in the field at the bottom right. The “Read from data source” button can read out the columns again in the event of any changes to the data source. The “Mappings” column shows us the attribute to which the respective column of the table is mapped later on.

The structure of our example table corresponds to the full default settings, so that there is nothing else to factor in under the menu item Options. CSV files can, however, exhibit structures that are very different, which must be factored in using the following setting options:

Encoding: The character encoding of the import file is defined here. ASCII, ISO-8859-1, ISO-8859-15, UCS-2, UTF-16, UTF-8 and Windows-1252 are available for selection. If nothing has been selected, the default setting of the operating system in use is applied.

Line separator: In most cases, the setting “detect automatically”, which is also selected by default, is sufficient. However, should the user establish that line breaks are not being identified correctly, then the corresponding, correct setting should be selected manually. This provides CR (carriage return), LF (line feed), CR-LF and None for selection. The standard used to encode the line break in a text file is LF for Unix, Linux, Android, Mac OS X, AmigaOS, BSD and others, CR-LF for Windows, DOS, OS/2, CP/M and TOS (Atari), and CR for Mac OS up to Version 9, Apple II and C64.

1st line is heading: It may be the case that the first line does not contain a heading; the system must be notified of this by removing the checkmark set by default next to “1st line is heading”.

Values in cells are surrounded by quotation marks: This option is selected so that the quotation marks are not included in the import when they are not wanted.

Identify columns: It must be specified whether the columns are identified by their heading, their position or the character position, as otherwise the table cannot be captured correctly.

Separator: If a separator other than the default semicolon is used, this must also be specified, unless the columns are identified by character position.

Moreover, the following rules apply: If a value in the table contains the separator or a line break, the value must be placed in double quotation marks. If the value itself contains a quotation mark, this must be doubled ("").
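To illustrate these rules with hypothetical values (semicolon as separator):

Song title;Comment
Eleanor Rigby;"Dreamy; reflective"
Hey Jude;"Paul called it a ""sad song"""

The second value is quoted because it contains the separator; in the third, the inner quotation marks are doubled.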

We will now start setting up the target structure that should be produced in the semantic graph database. In our example, we are starting with object mapping of the songs. In order to map a new object, we must press the “New object mapping” button.

The next step is to specify the type of object for import.

There are further specific settings in the options tab of the object mapping.

With objects of all subtypes: If the checkbox is set to "With objects of all subtypes", the import also includes objects from all subtypes of "Song". Since this is usually desired, the checkmark is set here by default.

Exact type is specified by the following mapping: If the exact type as which the object is to be created is specified in the import source, it can be mapped here via the “New...” button. It must be a subtype of the type specified in the “Mapping” tab.

Allow multiple objects: It is possible that the knowledge network already contains several objects with corresponding identifying properties (e.g. the same name). If the import mapping refers to these objects, an ambiguity conflict occurs. If you set the checkmark here, the import is performed for all of these objects, disregarding the ambiguity.

If you do not set the checkmark, the import is not carried out for the objects occurring multiple times; instead, the user is informed that the importer cannot uniquely identify the object.

Now we want to link the information in the table to the object mapping of the songs. Attributes of the individual songs are mapped in the same way as relations. In order to first create the track name for a song in the mapping, we add an attribute to the object mapping for song. Clicking on the “New attribute mapping” button opens a dialog, which we use to select the relevant column from the table to be imported.


As this attribute is the first one we created for the object mapping of songs, it is then automatically mapped to the name of the object, as the name is usually the most commonly used attribute.

The first attribute created for an object is also used automatically for the identification of the object.

An object must be identified by at least one attribute – by its name or its ID, or by a combination of multiple attributes (such as the first name, last name and date of birth of a person) – so that, if it already exists, it can be found unambiguously in the semantic graph database. This prevents unwanted duplicates from being created during import.

In the “Identify” tab it is possible to subsequently change the attribute identifying the object, or to add further attributes. In addition, it is possible to specify whether the values should be matched case-sensitively and whether the query should only return identical values (without index filters/wildcards). The latter is relevant if filters or wildcards are defined in the index that specify, for example, that a hyphen should be omitted from the index: a term containing a hyphen would then not be found if the search took place only via the index. In this case, a checkmark is needed here so that the search only finds the exactly identical value.

Now we can add further attributes to object mapping that do not need to contribute towards identification, e.g. the length of a song – and this is once again done via the “New attribute mapping” button. (Please note: first the object mapping “objects of song” must be selected again.) Now we select the “Length” column from the table to be imported. This time we have to manually select the attribute to be mapped to the “Length” column. The field on the bottom right contains the list of all possible attributes defined in the schema that are available to us for objects of the “song” type, among them also the “length” attribute.

 

Next, we want to map the album on which the song is located. Since albums are concrete objects in the semantic graph database, we need the relation that connects the song and the album. To map a relation, we first select the object for which the relation is defined and then click on the button “Map new relation”.

Following that, just like for attributes, we get a list of all possible relations; and the required relation “is included in” is naturally included.

In the next step, we now have to define where in this table the target objects come from. A new object mapping is required for the target; this is created using the “New” button. If the type of the target object is uniquely identified in the schema, it is copied automatically. If not, a list of possible object types appears.

For new object mappings, we then once again have to select the attribute that identifies the target object etc. This creates the target structure of the import.

 

Types can also be imported and exported. Let’s assume we want to import the genres of songs as types.

To map a new type, we choose the “New type mapping” button.

Following this, we have to specify the super-type of the new types to be created; in our example, the super-type would be “Song”:

Following that, we have to specify from which column of the imported table the name of our new types is to be taken:

Following that, we still have to specify on the “Import” tab that our new types are not supposed to be abstract:

If we now want to assign the corresponding songs to their new types, we have to use the system relation “has object.” In older versions of i-views this relation is called “has individual.” As the target we chose all objects of song (incl. subtypes), which are defined via the Name attributes in accordance with the Song title column.

If we now import this mapping, we get the desired result. The songs that already exist in the semantic graph database are taken into account by the import setting “Update or create if not found” and moved under their respective type so that no object is created twice (see chapter Import behavior settings). A quick reminder: A specific object cannot belong to several types at once.

There is another special case. If we have a table in which different types occur in one column, we can also map this in our import settings.

To do this, we select the mapping of the objects to which we want to assign subtypes (in this case “objects of location”) and then select the corresponding super-type on the “Options” tab.

It is also important not to forget to specify on the “Import” tab that the type is not supposed to be abstract so that concrete objects can be created.

Careful: Assuming Liverpool already exists in the semantic network but is assigned to the type “Location” because it did not have subtypes such as “City” and “Country” at that time. In this case, Liverpool is not created anew under the type City. Reason: The objects of the Location type are only identified via the name attribute and not via the subtype.

Extensions can also be imported and exported. Let’s assume we have a table that shows the role of a band member in a band:

Ron Wood is a guitarist with the Faces and the Rolling Stones, but a bassist with the Jeff Beck Group. In order to map this, we must select the object for which an extension was defined in the schema and then press the “New extension mapping” button.

Like an object mapping, an extension mapping queries the corresponding type. In the schema of the music network, the “Role” type is an abstract type. So it is necessary to define in the mapping that the role is to be mapped to subtypes of the “Role” type (see Type mapping chapter).

As with objects and types, the relation can be mapped to the extension (or to the subtypes of an extension).

The script mapping can only be used upon export. The script can be written in either JavaScript or KScript.

The script mapping is used, for example, when we wish to combine three attributes from the semantic graph database to form an ID. However, this may make the export slower. (In the case of an import, this could be achieved more easily using a virtual property. The use of virtual properties is explained in the chapter Table Columns.)

The following case is another example of the use of a script in the case of an export. It shows how several properties can be written into a cell with a separator. In this case, we wish to generate a table which lists the song names in the first column and all moods for the songs separated by commas:

To generate the second column, we require the following script:

function exportValueOf(element)
{
	var mood = "";
	// Look up the registered structured query and fetch all moods for this song
	var relTargets = $k.Registry.query("moodsForSongs").findElements({songName: element.attributeValue("objectName")});
	if (relTargets && relTargets.length > 0) {
		// Append all moods except the last one, each followed by a comma
		for (var i = 0; i < relTargets.length - 1; i++) {
			mood += relTargets[i].attributeValue("objectName") + ", ";
		}
		// The last mood is appended without a trailing comma
		mood += relTargets[relTargets.length - 1].attributeValue("objectName");
	}
	return mood;
}

The script references the following structured query (registration key: “moodsForSongs”):

The expression “findElements” allows us to access a parameter (in this case “songName”) within the query. The “objectName” is the internal name of the name attribute in this semantic model.

Within the if statement, we specify that when an element has several relation targets, these are output separated by commas; after the last relation target processed by the loop, no comma follows. Accordingly, an element with only one relation target is output without a comma.

The result is a list of songs with all their moods, which appear separated by a comma in the second column in the table:

If several values of one property type exist for an object (in our example, each song has several moods), there are three possible ways the table may look. For two of the three, the import mapping must be modified, as described in the following.

Option 1 – Values separated by separators: The individual values are found in a cell and are separated by a separator (e.g. a comma).

In this case, we go to the mapping of the data source, where the general settings are found, and to the “Options” tab found there. The setting used to specify separators within a cell is found here in the lower section. We now only have to locate the corresponding column of the table to be imported (“Mood”) and enter the separator used (“,”) in the column “Separator”.
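A row for option 1 could look like this (illustrative values, with the semicolon as column separator and the comma as separator within the cell):

Song title;Mood
Eleanor Rigby;Dreamy, Reflective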

 

Option 2 – Several columns: The individual values are located in their own respective column, whereby not every field must be filled in. As many columns are required as the maximum number of moods there are per song.

In this case, the corresponding relation must be created the same number of times as there are columns. In this case, the first relation must, accordingly, be mapped to “Mood1”, the second relation to “Mood2” and the third relation to “Mood3”.

 

Option 3 – Several rows: The individual values are located in their own respective row. Please note: In this case, it is essential that the attributes that are required for identification of the object (in this case the track name) appear in every row, as otherwise the rows would be interpreted as their own respective object without a name, making a correct import impossible.

In this case, no special import settings are required, as the system identifies the object using the identifying attribute and creates the relations correctly.

During the import process, a check is always performed to determine whether an attribute already exists. The “Identify” setting infers concrete objects from attributes. When we refer below to existing attributes, these are attributes whose value precisely matches the value in the column to which they are mapped. When we refer to existing objects, we mean concrete objects that have been identified through an existing attribute.

Example: If our network already contains a song called Eleanor Rigby, the name attribute (mapped to the track name column in our import table) is an existing attribute, so the song is an existing song as long as the song is identified only via the name attribute.

The settings for import behavior allow us to control how the import should react to existing and new semantic elements. The following table shows a brief description of the individual settings, while the sub-chapters of this chapter contain detailed and descriptive explanations.

Setting	Brief description
Update	Existing elements are overwritten (updated); no new elements are created.
Update or create if not found	Existing elements are overwritten; if none exist yet, they are created.
Delete all with same value (only available for properties)	All attribute values that match the imported value are deleted for the respective objects.
Delete all with same type	All attribute values of the selected type are deleted for the relevant objects, regardless of whether the values match or not.
Delete	Deletes exactly that element.
Create	Creates a new property/object irrespective of whether the attribute value or the object already exists.
Create if type not found (only available for attributes)	An attribute of the required type is only created if none of this type exists yet.
Create if value not found (only available for attributes)	An attribute with this value is only created if none with this value exists yet.
Do not import	No import.
Synchronize	To synchronize the contents for import with the contents in the database, this action creates all elements that do not yet exist, updates all elements that have changed, and deletes all elements that no longer exist.

During an import, we have to decide individually for every mapped object, every mapped relation and every mapped attribute which import settings we want to use.

Note: Unlike in other editors of the Knowledge Builder, a setting is neither “inherited” by the subordinate mapping elements, nor is the import setting for an object “inherited” by its attributes.

If this setting is applied to an attribute, it ensures that the value from the table overwrites the attribute value of exactly one existing attribute. No new attributes are created with this setting. If the object has more than one attribute value of the selected type, no value is imported.

If you use the “Update” setting for an identifying attribute while using the “Update or create if not found” setting for the corresponding object, the error message “Attribute not found” appears if the object to be identified does not yet exist in i-views.

If “Update” is applied to an object, this setting ensures that all properties of the object can be added or changed by the import. New objects are not created.

Example: Let’s assume we keep a database of our favorite songs. We have just received a list with songs that contain new information. We want to get this information into our database but prevent songs that are not our favorite songs from being imported. We use the “Update” setting to do this.


The song "About A Girl" is already available in the Knowledge Builder.

The import table contains information on the length, rating and creator of the song.

For song objects, we specify that they are supposed to be updated. All attributes, relations and relation targets receive the import setting “Update or create if not found”.

The result: The song has been updated and has received new attributes and relations. Properties that already existed have been updated with the new values.

This import setting is required in most cases and is therefore set as the default setting. If elements already exist they will be updated. If elements do not exist yet they are created in the database.

This import setting is only available for properties (relations and attributes) and is only used when the import setting “Delete” cannot be used for deleting. “Delete” does not work when a relation or an attribute occurs on an object several times with the same value; if it is attempted nonetheless, an error message appears. For example, the song “About A Girl” may have been linked to the band “Nirvana” using the relation “has author” by mistake.

In cases like this, the import setting “Delete” has no effect because, due to the multiple occurrences, it does not know which relations it is supposed to delete. In this case, “Delete all with same value” must be used.

This import setting is used if all attributes, objects or relations of a type are supposed to be deleted, irrespective of existing values. In contrast, the settings “Delete” and “Delete all with same value” take the existing values into account. Only the elements of those objects that occur in the import table are deleted.

Example: We have an import table with songs and the duration of the songs. We see that the duration differs in many cases and decide to delete the duration for these songs to make sure we do not have any incorrect information.

For most songs, the duration in the import table differs...

 

... from the duration of the songs in the database.

For the attribute “Duration” we use the import setting “Delete all with same type”.

After the import, all attribute values of the attribute type duration have been deleted for these 4 songs.

The import setting “Delete” is used to delete exactly one object/exactly one relation/exactly one attribute value. If none or several objects/relations/attribute values match the elements for import, an error message appears and the elements concerned are not deleted.

This import setting creates a new property/a new object irrespective of whether the attribute value or the object already exists. Sole exception: If a property may only occur once (observe the setting “May have multiple occurrences” for the attribute definition), then the new attribute is not created and an error message appears noting this.

This import setting is only available for attributes. A new attribute value is only created when the corresponding attribute does not yet have a value. The values do not have to be the same; what matters is that one value or another exists, or does not exist, for the corresponding attribute type. The simultaneous import of several attribute values to one attribute type is not possible, as in this case it is not possible to decide which of the attribute values should be used.

Example: Assume that we have an import table that contains the musicians with their alias names. A number of musicians also have several alias names. In this case, we cannot use the setting “Create if type not found”, because then the musicians with several alias names would not be given all of them.

This import setting is only available for attributes. A new attribute value is only created if the object does not yet have this value for the corresponding attribute. 

Example: Let's take again the import table that contains the musicians with their alias names. Here we can use the setting “Create if value not found”, because then the musicians with several alias names can be given all of these alias names.

The import setting “Do not import” allows us to specify that an object or a property should not be imported. This is useful when a mapping has already been defined and we want to use it again, however do not want to import specific objects and properties again.

The import setting “Synchronize” should be used with caution, because it is the only import setting that not only affects the objects and properties in i-views that have values that match those in the import table, but also extends to all elements of the same type in i-views. When an import table is synchronized with i-views, in principle this means that after the import, the result should look exactly the same as it does in the table.

If objects of one type are synchronized, all objects of this type that are not in the import table are deleted. The objects that exist are updated and the objects that are not in i-views are created as new objects.

Example: We would like to synchronize the music fairs in i-views (at the left) with a table with the fairs and their date (at the right):

For objects of the “Fair” type, we select the import setting “Synchronize”; for the individual attributes Name and Date of fair, the import setting “Update or create if not found” is used:

The attribute name is the identifying attribute of fair. There is no name for the object Music fair 2015 in the import table. If we import the table this way, an error message is output:

After the import, we now see that the import deleted the two objects that did not have a counterpart in the import table. The date was updated for Music fair 2016:

When attributes are synchronized, the following applies: When an existing attribute is not given a value by an import, it is deleted for the corresponding object of the import table. If the existing attribute has a different value to the import table, it is updated, even when it is allowed to occur several times. If the attribute does not yet exist, a new one is created.

When relations are synchronized, and they are not given a value, they are deleted for the corresponding object. If the existing relation has a different value to the import table, it is updated. If the target object does not yet exist in the database, a new one is created, provided that a corresponding import setting has been assigned to the target object. If the target object cannot be created as a new one, because, for example, the import setting “Update” was assigned, an error message appears notifying us that the target object was not found and will not be created.

When it comes to mapping database queries, the columns that are available for import are specified by the database tables and/or the SELECT statement. When mapping files, it is possible to adopt the columns from the file with the “Read from data source” button. But you can also specify them manually. In that case, you can choose whether to create a standard column or a virtual property.

If you want to export from the semantic graph database you have to enter the columns manually. You can export only standard columns, not virtual columns.

Virtual table column / virtual property
Virtual columns are additional columns that allow you to use regular expressions to transform the contents we find in a column of the table to be imported. Example: Let’s assume that “a.d.” is appended to all the years in our import table. We can correct this by creating a virtual column that adopts only the first 4 characters from the year column.
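For illustration, assuming the year is in column 1 of the import table: using the expression syntax described below, the definition of the virtual column could read <1c1-4>, which reduces a value such as “1982 a.d.” to “1982”.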

We can also define virtual properties during export.

We simply write the regular expressions into the column header (into the name of the column). During the process, partial strings enclosed in angle brackets <...> are replaced according to the following rules, with n, n1, n2, ... representing the contents of other table columns with the column number n.

Expression	Description	Example	Input	Output
<np>	Print output of content of column n	Hits: <1p>	1 (integer)	Hits: 1
		Hits: <1p>	‘none’ (string)	Hits: ‘none’
<ns>	Output of string in column n	Hello <1s>!	'Peter'	Hello Peter!
<nu>	Output of string in column n in upper case	Hello <1u>!	'Peter'	Hello PETER!
<nl>	Output of string in column n in lower case	Hello <1l>!	'Peter'	Hello peter!
<ncstart-stop>	Partial string from position start to stop from column n	<1c3-6>	‘Columns’	olum
		<1c3>	‘Columns’	umn
		<1c3->	‘Columns’	lumns
<nmregex>	Test whether the content of column n matches the regular expression regex. The following expressions are only evaluated if the regular expression applies.	<1m0[0-9]>hi	01	hi
		<1m0[0-9]>hi	123	(blank)
		<1m$>test	(blank)	test
		<1m$>test	123	(blank)
<nxregex>	Test whether the content of column n matches the regular expression regex. The following expressions are only evaluated if the regular expression does not apply.	<1x0[0-9]>hello	01	(blank)
		<1x0[0-9]>hello	123	hello
<neregex>	Selects all hits for regex from the contents of column n. Individual hits are separated by commas in the result.	<1eL+>	HELLO WORLD	LL,L
		<1e\d\d\d\d>	02.10.2001	2001
<nrregex>	Removes all hits for regex from the contents of column n	<1rL>	HELLO WORLD	HEO WORD
<ngregex>	Transmits the contents of all groups of the regular expression	<1g\+(\d+)\->	+42-13	42
<nfformat>	Formats numbers, date and time specifications from column n according to the ‘format’ format specification	<1f#,0.00>	3.1412	3.14
		<1f#,0.00>	1234.5	1234.50
		<1fd/m/y>	1 May 1935	1/5/1935
		<1fdd/mmm>	1 May 1935	01/May
Table columns can also be referenced independently from their column number by using specially defined identifiers. The advantage in this case is that the allocation is not lost if the column order is changed in the import table.

The identifier for the relevant column of the import table is entered in the column with the heading Identifier in the column definition table. These columns are referenced by creating a virtual table column that is given the identifier as its table column heading (see example 2).

Expression	Description	Example	Input	Output
<$name$regex>	Reference to a column by means of a unique column identifier name and subsequent transformation by means of the regular expression regex. The $ characters are a functional component of the identifier syntax.	<$Name$u>	'Company #1'	COMPANY #1

 

Example 1: Use of regular expressions (reference via column number)

Let’s assume we have an import table containing concrete objects without a name. However, we want these objects to be modeled as separate objects in our data model. Example: for a load point, column 88 contains its main value, which is torque. So we enter the expression load point <88s> as the definition of our virtual column that will represent the name of this load point. The resulting name for a load point with a torque of 850 would therefore be “load point 850”.

We can also use the virtual property to create a username consisting of the first 4 letters of the first name and the first 4 letters of the last name. If the person is named Maximilian Mustermann and we define the virtual column with the expression <1c1-4><2c1-4>, the result is “MaxiMust”.

The virtual property can also be used to create an initial password for a user during import. The expression could be Pass4<2s>. The resulting password for Maximilian Mustermann would be “Pass4Mustermann”.

A rather extensive example shows how the virtual property can be used to assign objects to the correct direct top-level group:

The three right columns are virtual columns.

<1mUG>: The number of the top-level group of the object is only written to the first of the virtual columns if the term “UG” (for Untergruppe (sub-group)) occurs in the first column for the object.

<2c1-3>000: The number to be written to the column consists of the first three characters of the second column and three zeros.

<1m>: Only if the first column for the object is empty, i.e. contains no value, is the number of the top-level group of the object written to the column.

<2c1-4>00: The number to be written to the column consists of the first four characters of the second column and two zeros.

Heimtextil 2016: This expression (the German term for home textiles) is written to the column for all objects.

Example 2: Use of individual identifiers (in combination with regular expressions)

In the following example, the contents of the Company column are transformed into upper-case letters by means of virtual columns: column 5 uses a reference via the column number, column 6 a reference via the column identifier.

 

Click on the preview to view the transformed column entries:

The following figure shows the effect of swapped columns in an import table: If only regular expressions (<1u>) are used, the wrong column is transformed; if an identifier with a downstream regular expression (<$Comp$u>) is used, the content remains the same.

Databases

The database, user and password must be specified in the mapping for a PostgreSQL, Oracle or ODBC interface.

Database specification

The database specification consists of the name of the host, the port, and the name of the database. The syntax is:

Database system	Database specification
PostgreSQL	hostname:port_database
Oracle	//hostname:[port][/databaseService]
ODBC	Name of the configured data source
MySQL	Separate configuration of database and host name

Configure user name and password

The user name and password are specified as stored in the database. Under the Table option it is possible to specify the table to be imported. However, for import there is also the option of going to the “Query” option and formulating a query that specifies which data are to be imported.
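Such a query is formulated in SQL; a minimal sketch with hypothetical table and column names:

SELECT title, album, length FROM songs WHERE year > 1960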

Encoding

In the case of a PostgreSQL mapping, it is possible to specify the encoding on the “Encoding” tab.

Special requirements of the Oracle interface

The function for direct import from an Oracle database requires that certain runtime libraries are installed on the computer performing the import.

What is required directly is the “Oracle Call Interface” (OCI), and it is required in a version that, according to Oracle, matches the database server to be addressed. That means that the OCI in version 11 must be installed on the importing computer in order to address an Oracle 11i database. The easiest way to install the OCI is to install the “Oracle Database Instant Client”. The “Basic” package version is sufficient. The client can be obtained from the company operating the server, or from Oracle after registering at http://www.oracle.com/technology/tech/oci/index.html.

After the installation, it must be ensured that the library can be found by the importing client, either by placing it in the same directory or by defining environment variables that match the relevant operating system (documented for the OCI).

Depending on the operating system on which the import will be executed, further libraries are necessary, and these are not always installed.

  • MS Windows: in addition to the required “oci.dll”, two further libraries are required: advapi32.dll (extended Windows 32 Base API) and msvcr71.dll (Microsoft C Runtime Library)

Apart from the XML import/export, all imports/exports are table-based and differ only in terms of the configuration of the source. For a description of a table-oriented display, you can consult the Example of the CSV file.

The principle of XML files is to make the different details for a record explicit by means of tags (<>) (and not by means of table columns). Accordingly, tags are also the basis for display when XML structures are imported to i-views.

Example: Let’s assume that our list of songs is available as an XML file:

<?xml version="1.0" encoding="ISO-8859-1"?>
<Contents>
    <Album type="Oldie">
        <Title>Revolver</Title>
        <Song nr="1">
            <Title>Eleanor Rigby</Title>
            <lengthSec>127</lengthSec>
            <Artist>The Beatles</Artist>
            <Topic>Mental illness</Topic>
            <Mood>Dreamy</Mood>
            <Mood>Reflective</Mood>
        </Song>
        [...]
    </Album>
    [...]
</Contents>

If we want to import this XML file, we choose the “XML file” data source when selecting the type, which opens the editor for the import and export of XML files. The specification of the file location also differs from the editor for CSV files: we can now choose between a local file path and the specification of a URI.

JSON preprocessing makes it possible to convert a JSON file to XML before the actual import.

You can choose Transform with XSLT if you want to convert the XML data from the selected XML file to different XML data before the import, for example in order to change the structure or further separate individual values. Use the “Edit” button to open the XML file, where you can then define the changes by means of XSLT.
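A minimal sketch of such a stylesheet (illustrative only): it restructures the example XML shown above by turning the type attribute of <Album> into a separate <Genre> child element.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Identity template: copies all nodes and attributes unchanged -->
    <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
    </xsl:template>
    <!-- Replaces the type attribute of Album with a <Genre> child element -->
    <xsl:template match="Album">
        <Album>
            <Genre><xsl:value-of select="@type"/></Genre>
            <xsl:apply-templates select="node()"/>
        </Album>
    </xsl:template>
</xsl:stylesheet>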

Once the file has been selected, use the “Read from data source” button to read out the XML structure, which is then displayed in the right-hand window.

We want to import the individual songs on our list. So we create a new object mapping and use the “Map to” button to select the <Song> tag. In contrast to a CSV import – where an individual row represents an object and only the attribute values have an equivalent in the table – semantic objects are mapped here by the XML structure. Therefore, a corresponding tag of the XML file must be specified for each of the objects to be mapped.

As our example shows, the tags are not always unambiguous without context: <Title> is used for both album titles and song titles. The object type only becomes clear in combination with the surrounding tag. Often the context of the XML structure and the context of the mapping hierarchy are synchronous: As we have already specified that the objects should be mapped to the <Song> tag, the XML structure makes clear which <Title> tag we actually mean when we map <Title> with the name attribute of songs. Where the mapping hierarchy and the tag structure are not parallel, we can use XPath to form strings in the XML import in addition to the tags occurring in the XML file.

As with the CSV import, it is necessary to use the “Identify” tab to specify for object mapping which attribute values should be used to identify the object in the semantic graph database. The first created attribute for an object is once again used automatically as the identifying attribute.

Options with XPath expressions

Let’s assume we only want to import songs from albums with the “Oldie” music style. In our XML document, the information for the music style is specified directly in the album tag under type="...". That means we have to use the editor to define an XPath expression describing the path in the XML document that contains only songs from oldie albums. The right-hand lower section of the editor contains a field for adding XPath expressions.

The correct XPath expression is:

//Album[@type="Oldie"]/Song

Explanation in detail: 

//Album	Selects all albums; their position in the document is irrelevant.
Album[@type="Oldie"]	Selects all albums of the “Oldie” type.
Album/Song	Selects all songs that are sub-elements of albums.

We can now use this expression to define an equivalent for the object mapping of songs.

XPath also offers many other useful selection functions. We can, for example, select elements by their position in the document, use comparative operators, and specify alternative paths.
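A few illustrative XPath expressions based on the example XML above:

//Album/Song[1]	Selects the first song of each album (selection by position).
//Song[lengthSec > 300]	Selects all songs that are longer than 300 seconds (comparative operator).
//Album/Title | //Song/Title	Selects album titles as well as song titles (alternative paths).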

In the “Options” tab, the following functions are available for selection irrespective of the data source:

Import in one transaction: This is slower than an import with several transactions and should only be used if a conflict would occur during an import with several transactions because many people are working in Knowledge Builder at the same time or because you want to import data where it matters that individual pieces of data are not viewed separately from each other. Example 1: Every hour, an import is executed with the machine load status. The combined load values must not exceed a certain value as that could result in a power failure. To ensure this rule can be taken into account (e.g. by means of a script), all values must be viewed jointly and then imported. Example 2: An import is executed with persons of which no more than one person may have a master key because only one master key exists. The import must also be performed in one transaction here because several transactions could result in missing the error that the attribute for the master key has been set for two persons.

Use several transactions: Default setting for fast import.

Journaling: Journaling should be used if very large amounts of data are deleted or modified in one import. Changes or deletions are then only applied to the index after 4,096 entries (the figure is variable). This speeds up the import because the index does not have to be updated for every single change/deletion; instead, the changes are transferred to the index in batches of at most 4,096.

Update metrics: Metrics are supposed to be updated if the import significantly affects the number of object types or property types, that is, if a large number of objects or properties of a type are added to the semantic graph database. If the metrics were not updated, this could negatively affect the performance of searches in which the corresponding types play a role.

Trigger activated: You can use this checkmark to determine if the trigger is supposed to be activated or not during import. If you wish to apply one trigger but not another one, you have to define two different mappings with the corresponding semantic elements. For information on triggers, refer to the Trigger chapter.

Automatic name generation for nameless objects: Enables the automatic name generation for nameless objects.

 

If there is a table-oriented source, we can make the following settings:

Import entire table: Even though it can take longer to import the entire table at once, it makes sense to select this option if there are forward references, i.e. if relations are to be drawn between the objects to be imported. In this case, both objects must already be available, which is not the case if the table is imported one row at a time. Furthermore, the progress display is more precise than for importing one row at a time.

Import table row by row: A table should always be imported one row at a time when it contains no forward references, since this procedure speeds up the import.

Separators within a cell: Refer to the chapter Mapping several values for an object type for an object.

 

If we have an XML-based data source, the following functions are available:

Incremental XML import: The XML import is performed step-by-step. These steps are specified by the partitioning element.

Import DTD: Imports the document type definition (DTD).

The functions in the “Log” tab allow changes that are made upon import to be tracked.

Place generated semantic elements in a folder: If new objects, types or properties are generated by the import, they can be placed in a folder in the semantic graph database.

Place changed semantic elements in a folder: All properties or objects with properties that were changed by the import can be placed in a folder.

Write error messages to a file: Errors can occur during import (for example, there may have been an identifying attribute for several objects, which is why the object could not be identified uniquely). These errors are displayed in a window following import by default, and the option of saving the error log is provided. If this is to occur automatically, then a checkmark can be placed in the box and a file can be specified here.

Last import / Last export: The date and time of the last import performed and the last export performed are displayed here.

The “Log” tab is also available in the case of the individual mapping objects. When necessary, a category can be entered for log entries here. Moreover, it is possible to define that the value of the corresponding object/corresponding property should be written into the error log. This is not activated by default, in order to avoid revealing sensitive data (e.g. passwords).

The function “Set registration key” can be found under the “Registry” tab, and can be used to register the data source for other imports and exports.

The function “Link existing source” allows a registered source to be used again.

“References” shows other places where a data source is being used:


One frequent job of attribute mapping is to import specific data from concrete objects, for example from persons: Telephone number, date of birth etc.

For the import of attributes for which i-views uses a specific format (e.g. date), the entries of the column to be imported must be provided in a form that is supported by i-views. For example, a string in the form abcde... cannot be imported to an attribute field of the date type; in this case, no value is imported for the corresponding object.

The following table lists the formats that i-views supports during the import of attributes. A table value yes or 1 is, for example, imported correctly as a Boolean attribute value (for a correspondingly defined attribute), while a value such as on or similar is not.

Attribute Supported value formats
Selection The mapping of import to attribute values can be configured with the “Value allocation” tab.
Boolean The mapping of import to attribute values can be configured with the “Value allocation” tab.
File It is possible to import files (e.g. images). For this to happen, either the absolute path to the file must be specified, or the files to be imported must be in the same directory (or a subdirectory that needs to be specified) as the import file.
Date
  • <day> <monthName> <year>, e. g. 5 April 1982, 5-APR-1982
  • <monthName> <day> <year>, e. g. April 5, 1982
  • <monthNumber> <day> <year>, e. g. 4/5/1982
The separator between <day>, <monthName> and <year> can be a space, a comma or a hyphen, for example (but other characters are also possible). Valid month names are:
  • ‘January’, ‘February’, ‘March’, ‘April’, ‘May’, ‘June’, ‘July’, ‘August’, ‘September’, ‘October’, ‘November’, ‘December’
  • 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'.

Please note: Two-digit years are expanded to 20xy (so 4/5/82 becomes 4/5/2082).

If mapping is set to “Freely definable format”, the following tokens can be used: YYYY and YY (year), MM and M (month number), MMMM (name of month), MMM (abbreviated name of month), DD and D (day)

Date and time For date and time see the corresponding attributes. The date must come before the time. If the time is omitted, 0:00 is used.
Color Import not possible.
Fixed point figure Import possible.
Integer
  • Integers of any size
  • Floats (separated by a point), e.g. 1.82. The figures are rounded during import.
Internet link Any URL possible.
Time

<hour>:<minute>:<second> <am/pm>, e.g. 8:23 pm (becomes 20:23:00). <minute>, <second> and <am/pm> can be omitted.

If mapping is set to “Freely defined format”, the following tokens can be used: hh and h (hour), mm and m (minute), ss and s (second), mmm (millisecond)

String Any string. No decoding is performed.

 

Boolean attributes and selection attributes

Selection or Boolean attributes can only assume values from a specified set; for selection attributes this is a specified list, and for Boolean attributes this is the value pair yes/no in the form of a clickable field. When importing these attributes, you can specify how the values from the import table are translated to attribute values of the semantic graph database. One option is to adopt the values as they are listed in the table; if they do not correspond to any possible attribute values defined in the semantic graph database, they are not imported. The other option is to specify value allocations between table values and attribute values, which are then imported.
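To illustrate with hypothetical table values: if the import table contains the values on and off for a Boolean attribute, a value allocation could translate them as follows:

Table value	Attribute value
on	yes
off	no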

The export of data from a semantic graph database into a table is prepared in the same editor and in the same way as the import.

  1. A new mapping is created in a table mapping folder in the main window.
  2. In the table mapping editor, the file to be generated is specified.

The difference from the import is that the columns are not read from the table but have to be created in the table mapping editor. Since the import and export editor are one and the same, you first have to select whether a new column to be created is a standard column or a virtual property. However, virtual properties cannot be used for export.

 

Exporting structured queries

It is possible to export the result of a structured query. This procedure makes sense if only certain objects that have been restricted by a search are supposed to be exported. Let’s assume, for example, we want to export all bands that have written songs that are more than 10 min long. To do this, we first have to define a structured query that collects the desired objects.

We then access this structured query from the configuration of the export. To do this, we select the mapping of a query rather than an object mapping in the mapping configuration header. The structured query can only be accessed with a registration key.

This has the effect that only the results of the structured query are exported. For these objects, we can now create properties that are to be included in the export: e.g. year the band was founded, members and songs. However, we might not want to export all of the songs of the bands we have thus compiled but only those songs that also match the search criterion, which is songs longer than 10 min in our example. To do this, we can assign identifiers to the individual search conditions in the structured query. These identifiers in turn can be addressed in the export definition.

 

Exporting collections of semantic objects

Collections of semantic objects can also be exported. These also need a registration key, which you can set under TECHNICAL -> Organizing folder.

 

Exporting the frame ID

The mapping of the frame ID enables us to export the ID of a semantic element assigned in the semantic graph database. To do this, we simply select the object, type or property for which we need the ID and then choose the “New mapping of Frame ID” button:

We can also decide if we want to output the ID in string format (ID123_456) or as a 64-bit integer.

 

Export via script

Finally, we have one additional powerful tool for the export: script mapping. For further information on this subject, refer to the “Script mapping” chapter.

 

Export actions for database exports

Mapping the properties of an object for an export into a database takes place exactly like mapping for an import and all other types of mapping. The only difference is that the export action has to be specified for the export; it determines which type of query is executed in the database. Three export actions are available in the selection dialog that opens:

  • Create new data records in table: New data records are added to the database table. This action corresponds to an INSERT.
  • Update existing data records: The data records are identified via an ID in the table. They are only overwritten if the value has changed. If there is no suitable data record, a new one is added. This action corresponds to an UPDATE.
  • Overwrite table content during export: All data records are first deleted and then written again. This action corresponds to a DELETE on the entire table followed by an INSERT.

RDF is a standard format for semantic data models. We can use the RDF import and export to exchange data between the semantic graph database and other applications, and also to transport data from one i-views semantic network to another.

During an RDF export, the entire semantic network is exported into an RDF file. RDF import, in contrast, is interactive and selective. That is, we can specify at schema level as well as for individual objects and properties what is supposed to be imported and what not.

 

Reconciliation from RDF with the existing objects in the semantic network

If the RDF data originates from the same schema as the network into which it is imported, e.g. from a backup copy, the RDF import automatically assigns objects and object types by means of their ID. Just like for table and XML imports, we can use the import settings to determine, e.g. whether existing objects are to be updated by the import or if new ones are supposed to be created.

If the data originates from another source, the default setting of the import is into a separate subnet. We can also integrate this external information into our existing data by means of manual assignments using the Map to function in the Mapping interface.

In addition to this, there are some global settings: Do we actually want to allow changes to the schema? Do we allow properties to be created multiple times? Finally, all schema changes are displayed on a separate tab.

The RDF export and import is suitable for restoring deleted individuals from a backup network. Proceed as follows to do so:

  1. Open the backup network in the Knowledge Builder
  2. Create a new folder and save the individuals to be restored to it. To do so, right-click to open the context menu in the list view of the individuals to be copied, and select “Copy content to new folder” while selecting the new folder as the destination.
  3. Open the RDF export on the newly created folder using the context menu
  4. Specify a file name in the export dialog, select the options “Use URLs (rdf:about)” and “Use frame URLs (krdfframe:)” and execute the export:



    Note: the option “Use KRDF” causes i-views to additionally export specific content that cannot be fully mapped using standard RDF syntax.
     
  5. Close the Knowledge Builder and open the target network in the Knowledge Builder
  6. Open the RDF import dialog in the main menu under Tools > RDF > RDF import:


     
  7. Select the file and press “Next”:


     
  8. Deactivate the option “Allow changes to the schema” in the selection dialog, and activate “Create folder with imported objects”:


     
  9. Execute import
  10. Check the restored individuals

 

The Admin tool can be used to transfer the entire schema of a semantic network from one semantic network to another via RDF export and import. However, if you only want to transfer selected types, you should consider using the “Copy schema to folder” function, which is available for all types via the context menu. This function creates a reference to the selected type together with all other (property) types that are required to create the selected type or objects of this type in the target network.

Once you have collected all required information in a folder, you can export this and import it into the target network in the same way as described in the previous chapter. However, the “Allow changes to schema” option should be deactivated in this case.

This section deals with the checking of access rights and with triggers:

  • Access rights regulate which operations on the semantic model may be executed by specific user groups. They are defined in the rights system in i-views. The rights system is located in the section Technical > Rights.
  • Triggers are automatic operations that are triggered by a certain event and execute the corresponding actions. The Trigger section is located under Technical > Trigger.

The rights system and triggers are initially not activated in a newly created semantic graph database. These areas have to be activated before they can be used.

The procedure for creating rights and triggers is basically identical. Filters are required that check whether certain conditions are met. If they are, the rights system grants or denies access; for triggers, a log entry is written or a script is executed. In the rights system, the arrangement of filters is referred to as the rights tree, while for triggers it is called the trigger tree.

We use rights to regulate user access to the data in the semantic network. The two basic objectives enabled by the rights system are:

  • Protection of confidential data: Users or user groups may only see data that they are allowed to read. This ensures that secrecy and confidentiality restrictions are applied.
  • Work-specific overview: Certain users only need a section of the data of a model for their work with the system. The rights system enables them to display only those elements that they need in order to complete their tasks.

The i-views rights system is very flexible and can be configured precisely for the different requirements of a project. By defining rules in a rights tree, consisting of individual filters and deciders, a network-specific configuration of the rights system is created. The many ways of combining these rules allow for very finely differentiated rights. It is not possible to list all possible configurations here; individual cases may require consulting.

How does the rights system work?

Access rights in the system are always checked when a user executes an operation on the data. The basic operations are:

  • Read: An element is supposed to be displayed.
  • Modify: An element is supposed to be changed.
  • Generate: A new element is supposed to be generated.
  • Delete: An element is supposed to be deleted.

If the access right is to be checked in a certain access situation, the rights tree is processed until a decision for or against access can be made in this situation. The rights tree consists of conditions that are checked against the access situation. To check the conditions, filters are used which filter the elements of the semantic network and the operations. Deciders are located at the end of a subtree of filters in the rights tree. These deciders either allow or prohibit access.

From the access situation, aspects are selected which are used as the condition for allowing or prohibiting access. The following aspects are often used for the decision:

  • The operation (generate, read, delete or modify)
  • The element that is supposed to be accessed
  • The current user

It is possible to select only one aspect of the access situation as a condition, but it is also possible to query a combination of the aspects listed. Example: "Paul [user] is not allowed to delete [operation] descriptions [element]".

In a newly created semantic network the rights system is deactivated by default. Before it can be used, it has to be activated in the settings of the Knowledge Builder.

Instructions for activation of the rights system

  1. In the Knowledge Builder, call up the Settings menu and select the System tab. Select the Rights field there.
  2. Place a checkmark in the Rights system activated field.
  3. In the User type field, specify the object type whose objects are the users of the rights system. This is usually the “Person” object type. (Type must not be abstract.)
  4. Once you have connected the i-views knowledge portal, enter a user (object of the previously defined person object type) in the Standard web user field.

Before activation of the rights system, the folder is called Rights (deactivated). Once the rights system has been activated, the folder is called Rights. When the rights system is deactivated, checks of the access rights are no longer performed. However, the rules defined in the rights tree are retained and used again after renewed activation of the rights system.

Please note: If you access an element from the web front-end without special log-in, the person specified under Standard web user is used. It is common to create a fictitious person called “anonymous” or “guest” here.

To ensure the rights system also functions in the Knowledge Builder, the user accounts of the Knowledge Builder must be linked to an object from the semantic model. The user account can only be linked to objects of the type for which activation of the rights system was specified in the user type field.

The link is generally required for using the operation parameter User in query filters, or for using the access parameter User in structured queries when the rights system or the search is not executed in an application, but rather in the actual Knowledge Builder.

Instructions for linking Knowledge Builder users to objects of the person type

  1. Open the Settings menu in the Knowledge Builder and select the System tab. Select the field User there.
  2. Select the user who is to be linked. Link can be used to link the user to a person object that is not yet linked to a Knowledge Builder account.
    The Unlink function removes the link between the Knowledge Builder account and the person object.

Please note: The user currently logged in cannot be linked.

In general, users with administrator rights may perform all operations, regardless of which rights were defined in the rights system. The definition as administrator is also implemented in the Settings menu in the User field on the System tab.

Traversing the rights tree

The rights tree is comprised of rules that are defined in a tree. The branches of the tree, also referred to as subtrees, are made up of the conditions to be checked. The conditions are defined in the system as filters that are nested in each other. During evaluation, the system works through the tree from top to bottom. When a condition matches the access situation, the check continues with the next filter in the subtree, which is checked in turn. This continues until the end of the subtree, where an access right is granted or denied. If a condition does not match the access situation, the system switches to the next subtree. When the system encounters an access grant or denial while working through the rights tree, the rights check ends with this result. The branches (subtrees) of the tree are therefore worked through successively, and the tree is "traversed" until a decision can be made.
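To make the traversal order concrete, here is a minimal conceptual sketch in JavaScript (purely illustrative; this is not the i-views API): filters and deciders are modelled as nested nodes, and the tree is walked from top to bottom until a decider is reached.

// Conceptual sketch of rights tree traversal (illustrative only, not the i-views API).
// A node is either a decider ({decision: true/false}) or a filter
// ({matches: function(situation) {...}, children: [...]}).
function traverse(nodes, situation) {
    for (var i = 0; i < nodes.length; i++) {
        var node = nodes[i];
        if (node.decision !== undefined)
            return node.decision;          // decider reached: the decision is final
        if (node.matches(situation)) {     // condition matches: descend into the subtree
            var result = traverse(node.children, situation);
            if (result !== undefined)
                return result;
        }
        // condition does not match: switch to the next subtree
    }
    return undefined;                      // no decision in this (sub)tree
}

// Example tree: "delete/modify of name, duration and publication date is denied,
// everything else is allowed"
var rightsTree = [
    { matches: function(s) { return s.operation === "delete" || s.operation === "modify"; },
      children: [
          { matches: function(s) { return ["name", "duration", "publicationDate"].indexOf(s.property) >= 0; },
            children: [ { decision: false } ] }   // decider: deny
      ] },
    { decision: true }                            // default decider: allow
];

// Deleting the description attribute is allowed by the default decider:
traverse(rightsTree, { user: "Paul", operation: "delete", property: "description" }); // returns true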

Filters and deciders are nested in each other in the form of folders, so that a tree construction is produced that is comprised of different subtrees. A folder can have several subfolders (several successor filters on one level), which produces branching in the rights tree. Folders that are defined on one level are worked through successively (from top to bottom).

Structure of the rights tree

When creating the rights tree, it is important to group the rules in a sensible way because once a decision as to whether access is allowed or denied has been made, no further rules are checked. Hence, exceptions should be defined ahead of global rules.

The two main cases that you have to distinguish are:

  • Negative configuration: Everything is allowed at the lowest subtree; denials are defined above it.
  • Positive configuration: Everything is prohibited at the bottom, except for what is allowed above.

The order of the subtrees is therefore crucial when creating the rights tree. The order of the conditions in a subtree in contrast (whether we check the operation first and then the property or vice versa) can be chosen freely.

You don’t necessarily have to define all filter types to define a subtree of a rights tree. A subtree consists of at least one filter and one decider. An exception is the last subtree which generally consists of a decider only, which allows all remaining operations (which have not been prohibited in the rights tree beforehand) or which prohibits all remaining operations (which have not been allowed in the rights tree beforehand).

Example: rights tree

This basic example shows a rights tree consisting of a rights tree part and a default decider that allows everything:


In the rights branch, the deletion or modification of the attributes name, duration and publication date is prohibited. To do this, an operation filter is used that has the operations delete or modify as its condition. Only these operations are let through by the operation filter. The next filter is a property filter that filters on certain properties. In this case, the attributes Name, Duration and Publication date are filtered irrespective of the object or property on which they are stored. The last node of the rights branch is the decider Deny, which prohibits all access operations that match the two preceding filters. If one of the two conditions does not apply to the access situation, the default decider Allow is executed.

This simple rights tree would look as follows in i-views:

Checking an operation using the rights tree example:


The left side shows the operation to be checked: User Paul wants to delete the Description attribute. The rights tree is depicted on the right side. The check of the condition of the first filter returns a positive result because Paul wants to execute the operation Delete. In the rights tree, the next filter of the rights sub-tree is executed. This is the property filter of the attributes, Name, Duration and Publication date. The check of the filter returns a negative result because the Description is not one of the filtered properties. Processing of the subtree is terminated. The next subtree of the rights tree is processed next. This is already the default decider “Allow” which allows everything that is not explicitly prohibited in the rights tree.

Deciders are always at the last position of a rights sub-tree. The combination with filters is used to determine access situations in which access is explicitly allowed or denied. If a decider is reached while traversing the rights tree, the check of rights is answered with this decision. The operation to be checked is then either allowed or rejected. The rights tree is then not checked any further.

  • Grant access: Access is granted in the access situation to be checked.
  • Deny access: Access is not granted in the access situation to be checked.

In general, there are two different deciders, a positive one called Grant access and a negative one called Deny access.

Instructions for creating a decider

  1. In the rights tree, choose the position at which you want to create the decider.
  2. Use the Grant access and Deny access buttons to create new deciders as subfolders of the currently selected folder.
  3. Assign a name to the folder.

To define rights, filters and deciders are combined in the rights tree. The Filters chapter explains the different filter types and how they can be used. The deciders Grant access or Deny access each represent the last node of the subtree of the decision tree. If the decider is reached, this decision terminates the traversing of the rights tree.

The following functions are available for defining rules in the rights system:

  • New operation filter: A new operation filter is generated.
  • New query filter: A new query filter is generated.
  • New property filter: A new property filter is generated.
  • New organizing folder: A new organizing folder is generated.
  • Grant access: A positive decider that grants access is generated.
  • Deny access: A negative decider that denies access is generated.

Organizing folders can be used to structure rights in a meaningful way. They do not affect the traversing of the rights tree. Their only purpose is to group large numbers of rights into subtrees of the rights tree that have related content.

Changing the arrangement of folders in the rights tree

In order to sort the filters and deciders in the rights tree into the right order, right-clicking opens a context menu:

The filter or decider can be renamed, deleted and exported in this context menu, and its position in the rights tree can be changed. If two folders (filters or deciders) are on the same level, the Upward or Downward function can be used to shift the folder further to the front or the back in the rights tree. To the top and To the bottom shifts the folder to the first or last position of the level in the rights tree accordingly.

If folders are to be nested in each other, meaning the level in the decision tree is to be changed, this can be done using Drag&Drop.

Assembly of rights

Assembling filters and deciders in the rights tree creates a large number of possible combinations for defining rights. In principle, there are three different procedures for defining rights:

  • Definition of rights for every possible access situation
  • Positive configuration
  • Negative configuration

Because defining access rights for every possible access situation is a very complicated procedure, one of the two other means of configuration is generally used. They are explained in the following two sections.

If rights are defined in the rights tree which only allow specific accesses and deny all other accesses about which nothing is specified, then this is referred to as a positive configuration of the rights tree. Rules are defined in each subtree of the rights tree, which allow specific operations. All operations to be checked traverse the rights tree: If the operation to be checked does not match the conditions of the subtrees, it is rejected at the end of the rights tree.

Example: Positive configuration

This example shows how a positively formulated rights tree might look in the Knowledge Builder:


The first rights subtree defines read access to the attributes name, duration and publication date. The read operation is allowed for these attributes. The second rights subtree allows new objects of the type song to be created. All other operations are generally denied at the end of the rights tree.

When rules are defined in a rights tree to reject specific operations and permit all the operations that, after a check, are identified as not matching those operations, this process is described as a negative configuration. Specific operations are prohibited in the subtrees of the rights tree. If one of the operations to be checked does not match the conditions of the subtrees, the operation is permitted at the end of the rights tree.

Example: Negative configuration

This example shows how a negatively formulated rights tree might look in the Knowledge Builder:


In contrast to the positive configuration, the first rights subtree here denies the access rights for deleting and modifying the Name, Length and Publication date attributes. The second rights subtree prohibits deletion of the relation that links the songs to the album containing them. All other operations may be executed.

Why do you need to define this right in i-views? On the one hand, you need an operation filter since this is about changing and deleting elements. On the other hand, the connection between the user and the element on which the user wants to execute an operation must be defined, which is only possible by means of query filters. 

Operation filter


In the operation filter, the operations Delete and Modify were selected.

Query filter


In the query filter, “Relation created by” is selected with relation target “Person.” On the relation target Person, the access parameter User was specified. The settings All parameters must apply and Search condition must be met are selected. In this case, the operation parameter “Primary semantic element” was selected.

A question relating to the schema is: on which elements is the relation “created by” defined? There are different options for implementing this relation in a semantic network:

  1. Definition on objects and types: The relation is only used on objects and types.
  2. Definition on all elements: The relation is used on all objects, types, extensions, attributes and relations.

In the first case, it makes sense to use the operation parameter “Primary semantic element” or “Superordinate element.” If you define the right using the superordinate element, this does not apply only to the object itself but to all properties stored on the objects that were created by the user. If you use the operation parameter “Primary semantic element,” the right also applies to all meta properties of the object.
In the second case, the operation parameter “Accessed element” is used, because only those elements may be changed on which the relation “created by” exists with the corresponding relation target, the user.

Compiling the right in the rights tree

There are two different variants for combining the filters. If there are no branches in the rights subtree, the order of the filters within the subtree is not relevant.


The graphic illustrates the two possible combinations: version 1 (left): first the operation filter, then the query filter; version 2 (right): first the query filter, then the operation filter. In both cases, the decider “Allowed” follows last.

Recommendation: It makes sense to have the operation filter in the first position, which makes it possible to create underneath it all other rights that filter on the same operation. This creates a simpler, more traceable structure in the rights tree.

Advanced right: Elements that were not created by the user may not be changed or deleted

The right implies denial for all elements that were not created by the user, but we have not yet expressed this in the definition of rights. To do so, we have to include the Deny access decider when creating the rights. Combining each of the two versions above with a negative decider results in the following variants; however, the two variants have different effects in the rights system.


If you add a Denied decider to each of the combination options presented above, two versions result: Version 1 (left): first the operation filter, then the query filter and the decider “Allowed”; the operation filter is also followed by a decider “Denied” in a second subtree. Version 2 (right): first the query filter, then the operation filter and the decider “Allowed”; in this version, the query filter is followed by a second subtree with the decider “Denied”.

Effects of the different versions on the rights system

Version 1 (left)

  • Allows modification and deletion of elements created by users themselves.
  • Prohibits modification and deletion of all other elements.
  • No statement is made in relation to all other operations.

Version 2 (right)

  • Allows modification and deletion of elements created by users themselves.
  • Prohibits all other operations on elements created by users themselves (e.g. read).
  • No statement is made in relation to all other elements.

The items show that version 2 does not express the requested access right. Only version 1 formulates the desired access right: all users can modify or delete elements they have created themselves, and elements that were not created by the users may not be modified or deleted.

When the Rights folder is selected in the System area, the Saved test cases and Configure tabs are available in the main window. A number of operations can be configured in the Configure tab.

The configuration of custom operations is generally only used when the Knowledge Builder is used together with other applications. Some applications define application-specific operations that should be checked together; this is a matter of checking a chain of operations, not just a single operation.

Instructions for the configuration of custom operations

  1. In the Knowledge Builder, select the Rights folder in the System area.
  2. Select the Configure tab in the main window.
  3. Click on Add to create a new operation.
  4. In the windows that follow, enter an internal name and a description for the new operation.
  5. The new operation is added as a user-defined operation.
  6. User-defined operations can be deleted again using Remove.

Triggers are automatic operations that are executed in i-views when a specific event occurs. They support workflows by automating steps that always remain the same.

Examples of the use of triggers:

  • Sending emails due to a specific change
  • Editing of documents in a specific order by specific persons
  • Marking jobs as open or done on the basis of a specific condition
  • Creating objects and relations when a specific change is performed
  • Calculating values in a previously defined way
  • Automatically generating the name attribute for objects (e.g. combining properties of the object)

How do triggers work?

Triggers are closely related to the rights system. They use the same filter mechanisms in order to determine when a trigger is initiated. The filters are arranged in a tree, the trigger tree, which is structured like the rights tree. It consists of filters that are used to define conditions for the execution of a trigger action. If an access situation occurs because an operation is performed, and that access situation matches the defined conditions, the corresponding trigger action is executed.

Trigger actions are in most cases scripts that execute operations on the elements of the access situation. This makes it possible to automate steps that remain unchanged or to perform intelligent evaluations on the basis of specific constellations in the semantic network. Scripts can execute any operations on elements that depend on complex evaluations, and thereby meet situation- and application-specific requirements for the semantic network. Most triggers are therefore project- and network-specific; a consultation should be performed for each individual case.

In order to be able to work with triggers, the trigger functionality must first be activated in the Knowledge Builder.

Instructions for the activation of triggers

  1. Call up the Settings for the Knowledge Builder.
  2. Select the System tab there, and the Trigger field.
  3. Place a checkmark in the Trigger activated field.

A Limit for recursive triggers can be specified here. The default setting is “None”. Triggers that call themselves are referred to as recursive triggers. This occurs when the trigger script itself executes operations in the semantic network that, in turn, match the filter definition of the trigger, as the sketch below illustrates.
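A minimal sketch of how such a recursion can arise (illustrative; the internal attribute name modificationCount and the attributeValue() accessor are assumptions):

// If this trigger is filtered on "Modify attribute value", the call to
// setAttributeValue() below is itself such a modification, matches the
// trigger's own filter again and re-executes the script: a recursive trigger.
function trigger(parameter, access, user)
{
    var count = parameter.attributeValue("modificationCount") || 0; // assumed internal name
    parameter.setAttributeValue("modificationCount", count + 1);
}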

Before activation of the trigger functionality, the Trigger folder in the technical area of i-views is called Trigger (deactivated). Once the trigger functionality has been activated, the folder is called Trigger.

Note: If the current user is used in triggers (e.g. in query filters or using the corresponding script function) and the user does not execute operations in an application, but rather in the actual Knowledge Builder, then the Knowledge Builder user account must be linked to a person object. The chapter Activation of the rights system explains how a link like this is created.

The trigger tree has the same structure as the rights tree. It is made up of branches (subtrees) consisting of filters and triggers. The filters are the conditions that are checked; the trigger at the end of a subtree is only executed when all preceding conditions are satisfied.

The trigger tree is queried for each operation performed on the data – the tree is “traversed”. If a subtree applies to the access situation, the trigger is executed. If the condition of a filter does not apply to the access situation, the system switches to the next subtree. In contrast to the rights tree, whose traversal stops when a decider is reached, traversal of the trigger tree continues after a trigger action has been executed. To specify that no further filters should be checked in the trigger tree after an action has been executed, the Trigger no other triggers button is used:

  • Trigger no other triggers: The traversal of the trigger tree is ended.

In contrast to the rights system, there is no decider at the end of a subtree; instead, there are trigger actions:

  • Define trigger: A new trigger action is created.

The available trigger actions are:

  • Enter log: A log entry is written.
  • Execute script > JavaScript: A script file in JavaScript is executed.
  • Execute script > KScript: A script file in KScript is executed.

Structure of the trigger tree

The order in which you define the triggers when designing the trigger tree usually has no effect on the performance of i-views. There are design recommendations for the rights tree, but these cannot be applied to the trigger tree, as the trigger tree continues to be traversed after a trigger action has been executed.

To provide a clearer structure for triggers, they can be collected in organizing folders. The organizing folders themselves do not affect the traversing of the trigger tree.

  • Organizing folder: Folder for grouping subtrees.

Example: trigger tree

This example shows a trigger tree that combines the names of persons and concerts automatically from properties of the objects:

This simple trigger tree begins with an operation filter and splits into two separate subtrees after it. If either the modify or the create operation is executed, it is let through by the operation filter. The persons subtree filters operations that are performed on attributes and relations of person type objects. If the operation affects either the first name attribute or the last name attribute, it is let through by the property filter, and the corresponding script that compiles the name attribute of a person from first and last name is executed. The second subtree also starts after the modify/create operation filter, but filters attributes and relations that are saved in concert type objects. Its property filter only lets operations through if they are performed on the attributes or relations for the date, the event location or the artist. If these conditions apply, the corresponding script that compiles the name of the concert is executed. A sketch of such a name-compiling script is shown below.
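A name-compiling script for persons might look roughly as follows (a sketch; the internal attribute names firstName, lastName and name, as well as the attributeValue() accessor, are assumptions):

/**
 * Sketch: compile the name attribute of a person from first name and last name.
 */
function trigger(parameter, access, user)
{
    // Read the name parts (internal attribute names are assumptions)
    var parts = [parameter.attributeValue("firstName"),
                 parameter.attributeValue("lastName")];
    // Keep only the parts that are present and join them with a space
    var fullName = parts.filter(function(p) { return p; }).join(" ");
    if (fullName)
        parameter.setAttributeValue("name", fullName);
}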

This is what this trigger tree would look like in i-views:

As described in the Trigger tree section, triggers consist of filters and trigger actions. These are combined in such a way that a specific trigger action is executed only when it is required.

The following functions are available in the trigger area:

  • New operation filter: A new operation filter is generated.
  • New query filter: A new query filter is generated.
  • New property filter: A new property filter is generated.
  • New delete filter: A new delete filter is generated.
  • New organizing folder: A new organizing folder is generated.
  • New trigger: A new trigger action is created.
  • Trigger no other triggers: A new “Stop” folder is created. It ends the traversing of the trigger tree.

When creating triggers, you should consider two fundamental properties of the trigger mechanism:

  • Execution of a trigger script can cause further triggers to be triggered. This occurs if operations in the semantic graph database are executed in the trigger script itself.
  • After a trigger action has been executed, traversal of the trigger tree continues. All trigger actions of the subtrees that apply to the access situation are executed.

Trigger actions are used to perform intelligent operations in the semantic graph database, which, for example, automate or support work flows. However, they are only executed when the access situation and the links in the semantic network assume a specific state defined by the filter.

Instructions for the creation of trigger actions

  1. Select the position in the trigger tree at which the trigger action is to be created.
  2. Use the Define trigger button to create a new trigger.
  3. Select the action type from the list: Enter log or Execute script (if you wish to execute a script, select the script language).
  4. The trigger is created as a subfolder of the currently selected folder.

An operation parameter must be specified for the script to be executed. In contrast to query filters, only one operation parameter can be specified. Execution of the script starts on the element contained in the operation parameter.

Time/type of execution

  • Before the change: The trigger is executed before the operation is performed.
  • After the change: The trigger is executed immediately after the operation has been performed.
  • End of transaction: The trigger is executed only at the end of the shared transaction.
  • Job-Client: The Job-Client determines the time of execution.

Please note: Triggers that are executed for delete operations should preferably use before the change as their time, as the element to be deleted will no longer be available otherwise. For other operations, a more suitable time is after the change or end of transaction, as it is then possible, for example, to add a property to the newly created element or automatically generate the name from various properties of an object if one or more properties were changed.
The order in which properties are imported is determined by the import itself. A trigger that is initiated during an import should therefore not rely on all properties being available yet.

Execute once only per operation parameter

If this setting is selected, the trigger is executed no more than once per transaction for each element selected via the operation parameter. If this setting is chosen, the time of execution should be set to end of transaction so that the final state of the element is used in the script.

Example: For persons, the name of the object is meant to consist of the first name and last name. With this setting, the trigger is executed only once if the first and last names are changed at the same time.

Execution does not initiate trigger

This setting specifies that the operations executed within a trigger cannot initiate any further triggers. This setting can be used to avoid endless loops.

Continue to execute script in case of script errors

If this setting is active, an attempt is made to restart after an execution error and continue with the execution of the script. This setting is predominantly useful for scripts that are supposed to execute instructions that are independent of each other, and not for scripts that build on previous steps of the script. 

Abort transaction if trigger fails

This setting defines the termination behavior in the event of script errors. If an error occurs while the script is being executed and this setting is active, all actions of the transaction are reversed. If this setting is not active, all actions are executed apart from the ones affected by the error. The original action that led to the trigger being called is nevertheless written to the knowledge network.

Execution during data refactoring

The term data refactoring describes operations for restructuring the semantic network, e.g. Change type or Choose new relation target. Data refactoring operations can, in some circumstances, initiate unwanted trigger actions and, in some cases, even generate errors during execution of the script. For this reason, it is possible to set for each trigger whether it is to be executed during data refactoring.

The function body for script triggers is created automatically.

The script has three parameters:

Parameter | Type | Description
parameter | $k.SemanticElement / $k.Folder | The selected parameter
access | object | Object with data of the change (new attribute value etc.)
user | $k.User | User who triggered the change

The following example sets the attributes with the internal names “geaendertAm” (“changed on”) and “geaendertVon” (“changed by”). “Primary semantic core object” should be selected as the operation parameter here.

/**
 * Perform the trigger
 * @param parameter The chosen parameter, usually a semantic element
 * @param {object} access Object that contains all parameters of the access
 * @param {$k.User} user User that triggered the access
**/

function trigger(parameter, access, user)
{
	// Store the date of the change
	parameter.setAttributeValue("geaendertAm", new Date());
	// Store the name of the user who triggered the change
	var userName = $k.user().name();
	if (userName)
		parameter.setAttributeValue("geaendertVon", userName);
	else
		// No user name available: remove any previously stored values
		parameter.attributes("geaendertVon").forEach(function(old) { old.remove(); });
}

The parameter "access" may contain the following properties (varies in each operation):

Property | Description
accessedObject | Accessed element
core | Core object
detail | Detail
inversePrimaryCoreTopic | Primary relation target
inverseRelation | Inverse relation
inverseTopic | Relation target
operationSymbol | "read", "deleteRelation" etc.
primaryCoreTopic | Primary semantic core object
primaryProperty | Primary property
primaryTopic | Primary semantic element
property | Property
topic | Superordinate element
user | User (identical to the "user" parameter of the function)
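
A trigger script can branch on these properties, for example on the operation that initiated it. A minimal sketch (the internal attribute name lastRelationDeletedOn is an assumption):

// Sketch: react only when a relation half is deleted; all other operations
// that reach this script are ignored.
function trigger(parameter, access, user)
{
    if (access.operationSymbol === "deleteRelation")
        parameter.setAttributeValue("lastRelationDeletedOn", new Date());
}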

 

Log triggers are suitable if the user would like to monitor or document which trigger was triggered when, and which operations were executed in the semantic network. The log is written to the respective log file (bridge.log, batchtool.log etc.) of the application environment in which the operation that initiated the trigger is performed.

Log entry lines | State of the semantic network at the time
#pre | before triggering
#post | after triggering
#end | at the end of the transaction
#commit | when the transaction is successfully ended

Log entries are used to retrace whether a trigger was executed in a specific access situation that actually occurred, and what it did. In contrast to this, a test can be performed in the test environment to determine whether a trigger would be triggered or not in a specific access situation, without the specific access situation being performed.

Instructions for the creation of log triggers

  1. Select the trigger script that is to be logged in the trigger tree.
  2. Use the button to create a trigger of type Enter log in the trigger tree directly in front of the script trigger.

Example:

Log entry that documents the change of the attribute e-mail using a trigger.

If you want to monitor the activities that users perform on objects, you should set up a changeLog trigger, also referred to as a change history.

For this purpose, you must first define a string attribute with the internal name “changeLog.” This changeLog attribute must be defined for all elements for which it is to document user activities.

 

Click “Open” to open the table showing who made the change, when they did so, what the change is, to which semantic element it applies, and which value was used.

The trigger must contain the operation filters that will log the change history, and the elements where the attribute is to be visible.

The trigger script looks like this:

/**
 * Perform the trigger
 * @param parameter The chosen parameter, usually a semantic element
 * @param {object} access Object that contains all parameters of the access
 * @param {$k.User} user User that triggered the access
**/

function trigger(parameter, access, user) {
	// Append an entry describing this change to the element's changeLog attribute
	$k.History.addToChangeLog(access, parameter);
}

Example

A change log is to be saved in all objects in a semantic network. The aim is to log the modification, creation and deletion of properties in the objects. First, an operation filter is created that reacts to the operations “Delete attribute”, “Modify attribute value”, “Create relation”, “Create relation half” and “Delete relation half”.

In the next step, a query filter is defined to determine the semantic elements on which operations are performed.

The “Superordinate element” operation parameter was added to the trigger script, because it corresponds to the query filter.

The trigger rules (operation filter, query filter and trigger script) are located in the hierarchy tree as follows due to their checking sequence:

Filters define the conditions in the rights tree or the trigger tree; they restrict the access situations in which a decider or trigger is executed. New filters are created under the node currently selected in the tree. This way, they are nested in each other.

The three filter types operation filter, query filter and property filter are available in the rights system. In addition to the three basic filter types, the trigger area provides a specific filter – the deletion filter.

There are different types of filters – when do we use which filter?

  • Operation filter: Filters the operations; selection from a list
  • Query filter: Filters elements by means of a structured query
  • Property filter: Filters relations and attributes; selection from a list
  • Delete filter: Filters the deletion of elements

Operations can only be determined using an operation filter. Users can only be determined using a query filter. Properties can be determined using either query filters or property filters. The use of property filters makes sense when properties should be filtered regardless of other properties in the semantic model such as relations to the user. Above all, when large sets of properties are to be filtered, it is more straightforward and clearer to do so in a list instead of in a structured query. If relations to the accessed element or to the user are to be factored in, then a query filter must be used.

Instructions for creating a filter

  1. In the rights or trigger tree, choose the position at which you want to create a new filter.
  2. Use the corresponding filter buttons to create a new filter.
  3. The filter is created in the tree as a subfolder of the currently selected folder.
  4. Assign a name to the folder.

To specify the operations for which an access right should apply or a trigger should be executed, operation filters are required. By selecting the required operation it is possible to add it to or remove it from the filter.

The operations are divided into groups. When you select the higher-level node of a group, all lower-level operations are included in the filter. If, for example, you choose the Create operation, the filter considers the operations Create attribute, Create extension, Create folder, Create relation, Create relation half, Create type and Create translation.

The Operations chapter lists all available operations and also specifies which operation parameters can be used in combination. The various operation parameters are explained accordingly in the Operation parameters chapter.

You can use property filters to filter attributes and relations. There are two different procedures for using a property filter:

  • Restriction on properties: Specify the properties to which the condition is supposed to apply. Subsequent filters or deciders of the subtree are only executed if the access property matches the selected property.
  • Exclude the following properties: Specify the properties to which the condition is not supposed to apply. If the access property matches one of the selected properties, subsequent filters, deciders or triggers are not executed.

You can use Add and Remove to select the properties listed below. All properties below can be selected using All. None removes all selected properties. You can use the Edit field to call up the Detail editor of the attribute or relation that is selected in the top selection field. The tabs All properties, Generic properties, Attribute, Relation, View configuration and Semantic network are designed to help users find the filtering properties more quickly. The Semantic network tab shows all relations and attributes that the user has created.

Query filters make it possible to include elements in the environment of the element that is to be accessed. This allows not only individual properties, but also relationships between objects, properties and attributes to be included in the rights or trigger definition. When using query filters, it is necessary to specify an operation parameter to which the result of the structured query is compared. All available operation parameters are explained in the Operation parameters chapter.

There are two ways to define query filters:

  • Search condition must be met: This setting is selected initially. If the search result of the structured query matches the operation parameter, the condition of the filter is met and subsequent filters, deciders or triggers are executed.
  • Search condition must not be met: If the structured query returns the element transferred by the operation parameter as its result, the condition is not met and the check of the rights or trigger tree switches to the next subtree. If the result of the structured query differs from the element of the operation parameter, the condition is met and the subsequent filter, decider or trigger is executed.

The objects of the type at the top left that match the search condition are the result of the structured query. These are compared to the element that is transferred by the operation parameter. It is possible to use access parameters in the structured query. They can be used, for example, to include the user, accessed element etc. in the query.
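Conceptually, this check is a membership test. A small illustrative sketch (not the i-views API):

// Conceptual sketch (not the i-views API): a query filter passes if the element
// bound to the operation parameter is contained in the query result, or, with
// "Search condition must not be met", if it is not contained.
function queryFilterPasses(queryResult, parameterElement, conditionMustBeMet) {
    var contained = queryResult.indexOf(parameterElement) >= 0;
    return conditionMustBeMet ? contained : !contained;
}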

During selection of the operation parameter it is possible to configure whether

  • all selected parameters must apply (All parameters must apply)
  • or only one parameter must apply (One parameter must apply).

Please note: Initially, the setting All parameters must apply is selected. If, for example, the operation parameters Accessed element and Primary semantic element are selected, the condition is met only if the result of the structured query is both the accessed element and the primary semantic element of the operation to be checked.

Example 1: Query filter in the rights system

A right should be defined that determines that already published songs may be viewed by everyone; unpublished songs, in contrast, may not.

In this example, the user Paul would like to read song X. This operation is now checked by the rights system. A query filter has been defined in the rights system which checks whether the song has already been published. The structured query of the query filter searches for objects of the “Song” type, with the restriction that the attribute “Publication date” is in the past. The structured query delivers all songs that meet this condition. If song X is one of them, the check by the filter returns a positive result and the folder that follows the query filter (with a filter or decider) is executed.

For this query filter, the settings “Search condition must be met” and “All parameters must apply” must be selected.

Example 2: Query filter in the rights system

In most cases, there is a connection between the user who wants access and the objects and properties that the user wants to access. An example of this would be: “Employees of a department who look after a branch may edit all customers of this branch.” Another version of this example that is illustrated below would be: “Users who maintain an artist may edit and delete this artist.”

The left side shows a section of the semantic network: The object Paul is linked to the objects Artist A, Artist B and Artist C via the relation Maintains. The inverse relation of “maintains” is “maintained by,” which exists between the objects Artist A, Artist B and Artist C and the object Paul, and is queried in the query filter. This relation in the semantic network represents that one person is responsible for data maintenance relating to an artist.


In this example, user Paul wants to delete the object Artist A. The corresponding query filter delivers all artists that were maintained by a certain user as the query result. The current user is transferred to the structured query as an access parameter. The “Structured query” chapter explains access parameters in structured queries. Hence the search in this access situation returns all artists that were maintained by Paul. Since Artist A is one of them, the query filter check returns a positive result.

In this example, the access situation adds two aspects to the query filter: the artist to be deleted and the user. The query filter can thus be defined in two different ways. Either the artist is transferred to the query filter as the accessed element and the user is used as the access parameter in the structured query, or the user is transferred to the query filter as the operation parameter “User” and the artist is used as the access parameter “Accessed element” in the structured query.

Delete filters are only available for defining triggers. They are used to test, in a deletion situation, whether the higher-level element is also affected by the delete operation. A delete filter must be used, for example, if a trigger is not supposed to be executed when an object is deleted together with all its properties, but is supposed to be executed when only a certain property of the object is deleted.


When defining a delete filter, at least one operation parameter must be specified, which determines which element's deletion is to be tested.

  • All parameters must apply: All specified operation parameters must apply. For example, if two operation parameters are specified (accessed element and primary element), then it is checked whether the delete operation applies to both the accessed element and the primary element. This can only be the case if the primary element is also the accessed element.
  • One parameter must apply: Only one of the specified operation parameters has to apply.

Note: In most cases, the superordinate element or the primary semantic element is used as the operation parameter, because the check is to determine whether only the property is deleted or whether the property is deleted because the entire object has been deleted.

  • Not affected by the delete operation: The condition of the filter is positive if the element transferred in the operation parameter is not deleted in this transaction. 
  • Affected by the delete operation: The condition of the filter is thus positive if the element transferred in the operation parameter is deleted in this transaction.

Example: Delete filters in triggers

In this example, a trigger is only to be executed if the artist, location or date of an event is modified or deleted, but not if the object containing the properties is deleted. The setting Not affected by the delete operation is used for this purpose. If the delete operation affects the superordinate accessed element, which in this case is the concert object itself, then the checking of the subtree is aborted because the filter has returned a negative result.


The superordinate element operation parameter is used along with the Not affected by the delete operation setting.

 


In this example access situation, the Date attribute with the value “19.10.” in the “Concert X” object is deleted. The object itself is not deleted. The “Concert” query filter, which is defined by the “Superordinate accessed element” operation parameter, and the “Artist, location and date” property filter receive a positive response. The subsequent delete filter also returns a positive response, as the object containing the property (superordinate accessed element) is not affected by the delete operation – in line with the “Not affected by the delete operation” setting of the delete filter.

 


In this access situation the “Concert X” object is deleted by user Paul. Deleting the object automatically deletes all properties of the object – and thus all attributes of the object as well. The check of the trigger tree is executed for the deletion of both the object and the attribute. The “Concert” query filter and the “Artist, location and date” property filter are fulfilled for the delete process of the attribute in the check of the trigger tree. The delete filter itself is not fulfilled in this situation, as the “Concert X” object containing the “Date 19.10.” property is deleted.
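
The two access situations can be summarized in a small conceptual sketch (illustrative, not the i-views API):

// Conceptual sketch (not the i-views API) of the setting "Not affected by the
// delete operation": the filter passes only if the element bound to the operation
// parameter (here: the superordinate element) is not itself deleted.
function deleteFilterPasses(superordinateElement, deletedElements) {
    return deletedElements.indexOf(superordinateElement) < 0;
}

// Deleting only the date attribute: Concert X is not among the deleted elements,
// so the filter passes and the trigger runs.
// Deleting Concert X itself: the object is among the deleted elements,
// so the filter blocks and the trigger is skipped.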

Using delete filters makes sense, for example, if the trigger script compiles the name of the object from its properties. The name of the object is then not recalculated several times while the properties of the object are deleted; instead, the object and all related properties are deleted without the name-compiling script being executed. This avoids unnecessary computation and can be important in specific application scenarios, e.g. if the trigger sends an email notification that an object has been renamed (this avoids sending numerous superfluous emails about the name change).

Operation parameters control the element to which the result of the structured query for the condition check should be compared in query filters. In the simplest case, the result is compared to the element that is to be used to execute the operation to be checked. Operation parameters can be used to modify the transferred element. You can choose the current user or elements from the element environment that will be used as the comparison element for the query filter.

They are also used, among other things, in delete filters and script triggers. There, starting from the element being accessed, they specify the element on which the script is to be executed, or which element's deletion is to be filtered.

When is this useful? It is essential when you cannot use the affected object itself for the comparison but only an element from its environment: for example, when you want to check access rights for creating new objects or types. It is not possible to define a structured query that returns an object that has not been created yet. In this case, the query filter must be compared to something else, i.e. the type of the object to be created or, in the case of types, the super-type of the type to be created.

The following operation parameters are available:

  • (Super) type: In the case of types, the super-type of the type. In the case of objects, the type of the object. In the case of attributes or relations, the type of the property.
  • User: The user object of the user who executes the operation.
  • Property: The attribute or relation that the operation affects. If the operation is performed on an object, type or extension, this parameter is blank.
  • Inverse relation: If the property affected by the operation is a relation, this parameter contains the inverse relation half.
  • Inverse relation type: The type of the inverse relation. This can be used for the generation of relations.
  • Core object: If the higher-level element is an extension, the object on which the extension is stored. Otherwise identical to the accessed element.
  • Folder: The folder affected by the operation.
  • Primary property: In the case of meta-properties, the property closest to the object, type or extension. Otherwise identical to Property.
  • Primary semantic core object: If the primary semantic element is an extension, the core object of the extension. Otherwise identical to the core object.
  • Primary relation target: The primary semantic element of the relation target.
  • Primary semantic element: If the superordinate accessed element is a property, the object, type or extension on which the property is stored (transitive). Otherwise identical to the superordinate element.
  • Relation target: If the property affected by the operation is a relation, the relation target of the relation half. (The relation source would be the higher-level element in this case.)
  • Superordinate element: The object, type or extension affected by the operation. In the case of properties, the object, type or extension on which the property is stored.
  • Accessed element: The element affected by the operation.

The accessed element is the element of the semantic network that is currently being accessed. For query filters in the rights system, for example, the accessed element is the element that is to be accessed by an operation. When checking an access situation, the element is then transferred to the query filter on which the operation is supposed to be executed. The query filter then compares the accessed element to the result of the structured query.

The “User” parameter is always the user object of the user who is currently logged in, regardless of the accessed element. For this purpose, the Knowledge Builder account must be linked to a semantic network object. The chapter on activation of the rights system describes how this link is created.

Accessed element | User
Object, type, extension or property | Object of the user who is currently logged in

The “(super) type” parameter is used, for example, if operations that create new elements are to be checked in the rights system. When elements are created, the query filter cannot be defined so that it finds elements that have not been created yet. The query filter must therefore work on the super-type or type of the element to be created. During the creation of objects, attributes and relations, the type of the object, attribute or relation is used. For types, the super-type of the type to be created is used.

Accessed element | (Super) type
Object or extension | The type of the object or extension
Type | The super-type
Property | The type of the property

The superordinate element is used if the element on which a property is directly stored is to be retrieved.

Accessed element | Superordinate element
Object, type or extension | The actual accessed element
Property | Object, type or extension on which the property is stored
Meta-property | Property on which the meta-property is stored

Attributes and relations are understood to be properties. The operation parameter contains the attribute or the relation on which the operation is performed. If the operation is performed on an object or type, the operation parameter property is blank.

Accessed element | Property
Attribute or relation | The actual accessed element
Object, type or extension | Blank

The inverse relation is the “opposing direction” of a relation half. If a relation is considered as a pair of directed edges attached between two elements (the “forward direction” and the “reverse direction”), the inverse relation is the opposing relation half. The inverse relation therefore has the relation source of the relation half as its relation target, and vice versa.

Accessed element Inverse relation
Relation half The inverse relation half
Object, type, extension or attribute Blank

The inverse relation type is the type of the inverse relation.

Accessed element Inverse relation type
Relation half Type of inverse relation half
Object, type, extension or attribute Blank

The relation target is not the source, but rather the “target” of a relation half. It can also be considered the relation source of the inverse relation half.

Accessed element Relation target
Relation half The relation target is the relation source of the inverse relation
Object, type, extension or attribute Blank
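
The following sketch illustrates this model of relation halves. All type and function names are invented for the illustration; only the band example is taken from this documentation.

// Hypothetical sketch of relation halves. A relation between two elements
// consists of two opposing halves; each half points from its source to its
// target, and its inverse half points the other way.

interface RelationHalf {
  typeName: string;        // e.g. "is member of"
  source: string;          // element the half is stored on
  target: string;          // element the half points to
  inverseTypeName: string; // e.g. "has member"
}

function inverseOf(half: RelationHalf): RelationHalf {
  // The inverse relation has the source of the half as its target and vice versa.
  return {
    typeName: half.inverseTypeName,
    source: half.target,
    target: half.source,
    inverseTypeName: half.typeName,
  };
}

const memberOf: RelationHalf = {
  typeName: "is member of",
  source: "John Lennon",
  target: "The Beatles",
  inverseTypeName: "has member",
};

const inverse = inverseOf(memberOf);
console.log(inverse.typeName);                   // "has member" (inverse relation type)
console.log(memberOf.target);                    // "The Beatles" (relation target)
console.log(inverse.source === memberOf.target); // true (relation target = source of inverse)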

The primary semantic element always delivers an object, type or extension. If this parameter is applied to a meta-property, the chain of properties is processed transitively until the object, type or extension to which the properties are attached is found.

Accessed element Primary semantic element
Object, type or extension The actual accessed element
Property Object, type or extension on which the property is stored
Meta-property Object, type or extension on which the property carrying the meta-property is stored (transitive)

In contrast to the primary semantic element of a relation half, the primary relation target is not the object, type or extension on which the relation half is located but the object, type or extension to which the inverse half of the relation is connected.

Accessed element Primary relation target
Relation half The primary semantic element of the relation target (object, type or extension on which the inverse relation half is stored)
Relation half whose relation target is a property or meta-property The primary semantic element of the relation target (object, type or extension of the meta-property or property on which the inverse relation half is stored)
Object, type, extension or attribute Blank

The core object is used when work is done with extensions. Instead of the extension, the core object delivers the object to which the extension is saved.

Accessed element Core object
Object, type or property The actual accessed element
Extension The object to which the extension is saved

If you want the corresponding object or type to be processed for an element, you must use the primary semantic core object. In contrast to the primary semantic element, no extensions are permitted as the result: in the case of extensions, the core object is output.

Accessed element Primary semantic core object
Extension The object to which the extension is saved
Object or type The actual accessed element
Property or meta-property of an extension The object to which the extension is saved
Property or meta-property of an object or type Primary semantic element – object or type to which the property is saved (transitive)

The primary property is always a property. It resembles the primary semantic element in that it processes meta-properties transitively. However, it delivers the last property that precedes the primary semantic element, that is, the property stored directly on the primary semantic element.

Accessed element Primary property
Property The actual accessed element
Meta-property (or meta-property of a meta-property) The property that is closest to the object, type or extension
Object, type or extension Blank
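
The derived parameters described above – primary semantic element, core object, primary semantic core object and primary property – can be summarized in one hypothetical sketch. The SemElement type and the chain of owner references are assumptions made for this illustration only.

// Hypothetical sketch of the derived parameters. Elements form a chain:
// meta-property -> property -> object/type/extension -> (core) object.

type ElementKind = "object" | "type" | "extension" | "property";

interface SemElement {
  kind: ElementKind;
  name: string;
  owner?: SemElement;      // for properties: the element the property is stored on
  coreObject?: SemElement; // for extensions: the object the extension is saved on
}

// Primary semantic element: walk up the property chain transitively until
// an object, type or extension is reached.
function primarySemanticElement(el: SemElement): SemElement {
  return el.kind === "property" && el.owner ? primarySemanticElement(el.owner) : el;
}

// Core object: for extensions, the object the extension is saved on.
function coreObject(el: SemElement): SemElement {
  return el.kind === "extension" && el.coreObject ? el.coreObject : el;
}

// Primary semantic core object: like the primary semantic element,
// but extensions are resolved to their core object.
function primarySemanticCoreObject(el: SemElement): SemElement {
  return coreObject(primarySemanticElement(el));
}

// Primary property: the last property before the primary semantic element,
// i.e. the property stored directly on the object, type or extension.
function primaryProperty(el: SemElement): SemElement | undefined {
  if (el.kind !== "property") return undefined;
  return el.owner && el.owner.kind === "property" ? primaryProperty(el.owner) : el;
}

// Meta-property "Source" on attribute "Length" on object "Song X":
const songX: SemElement = { kind: "object", name: "Song X" };
const length: SemElement = { kind: "property", name: "Length", owner: songX };
const source: SemElement = { kind: "property", name: "Source", owner: length };

console.log(primarySemanticElement(source).name);    // "Song X"
console.log(primaryProperty(source)?.name);          // "Length"
console.log(primarySemanticCoreObject(source).name); // "Song X"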

If a folder from the Folder area of the semantic network is to be transferred to the search as a parameter, the Folder operation parameter must be used.

Accessed element Folder
Folder The actual accessed element
Object, type, extension or property Blank

Example 1: Accessed element and property in the rights system

The example below shows the access situation on the left side and the corresponding query filter on the right side.

Access situation: User Paul wants to change the attribute Duration of song X.

Query filter: All attributes created by a certain user are filtered. In the structured query, the access parameter “User” is used, which restricts the user objects to the person who wants to execute the operation. The result set thus corresponds to all attributes that were created by Paul.

Checking the access rights: To check the access rights, the attribute (accessed element/property) on which the operation is to be executed is transferred to the query filter. If this attribute is included in the set of search results, the query filter check returns a positive result. 

Operation parameter: The attribute Duration is transferred to the query filter. In this case, both the operation parameter “Accessed element” and the operation parameter “Property” can be used, because the attribute Duration is a property and at the same time the accessed element of the operation.

Example 2: Superordinate element and primary semantic element in the rights system

This example shows the access situation on the left side and the corresponding query filter on the right side.

Access situation: User Paul changes the Length attribute, which currently has the value 02:30 and is part of the Song X object.

Query filter: The query filter is defined in such a way that it searches for all objects that were created by a specific user; the “User” parameter supplies the currently logged-in user. Accordingly, the query filter finds all the objects created by Paul.

Checking the access rights: If the result set of the query filter contains Song X, the following folder (filter or decider) is executed.

Operation parameter: Use of the “Superordinate element” operation parameter has the effect that, instead of the “Length” attribute to be changed, the object on which it is defined – Song X – is transferred to the query filter. The “Primary semantic element” operation parameter could also be used in this case. With “Superordinate element”, all direct properties and the object itself are rated positive by the filter; “Primary semantic element” would additionally permit meta-properties of the object, no matter how many properties lie between the object and the meta-property.

Example 3: (Super) type in the rights system

The example shows the access situation on the left-hand side and the query filter applied in this situation on the right-hand side.

Access situation: User Paul wants to create the attribute Length on the object Song X. The value is to be 02:30.

Query filter: The query filter returns the attribute type “Length.”

Checking the access rights: If the attribute to be created has the “Length” type, the check of the query filter returns a positive result. 

Operation parameters: When creating elements, it is not possible to define a query filter that returns the element to be created and is thereby able to check the access rights. This means that a different operation parameter must be chosen as the accessed element when creating elements. The “(super) type” operation parameter is suitable in these situations. In this example, the type of the attribute to be created is used – the attribute type Length.

Operation filters specify the operations to which the subsequent part of the filter process applies. If an operation other than the one specified in the operation filter is executed in the access situation, the system switches to the next subtree when traversing the rights or trigger tree.

The general operations Create, Read, Modify and Delete consist of multiple individual operations. If an operation group is prohibited, all the operations it contains are also prohibited; conversely, if an operation group is permitted, all the operations it contains are automatically permitted as well.
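
A minimal sketch of this group logic, assuming a simple mapping of groups to operations (the group contents shown here are abbreviated; the full table follows below):

// Hypothetical sketch: operation groups as sets of individual operations.
// Prohibiting a group prohibits every operation it contains, and vice versa.

const operationGroups: Record<string, string[]> = {
  Generate: ["Generate attribute", "Generate object", "Generate relation"],
  Delete: ["Delete attribute", "Delete object", "Delete relation half"],
};

function isPermitted(operation: string, permittedGroups: Set<string>): boolean {
  // An operation is permitted exactly when its group is permitted.
  for (const [group, ops] of Object.entries(operationGroups)) {
    if (ops.includes(operation)) return permittedGroups.has(group);
  }
  return false; // unknown operation: treated as not permitted in this sketch
}

const permitted = new Set(["Generate"]);
console.log(isPermitted("Generate object", permitted)); // true
console.log(isPermitted("Delete object", permitted));   // false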

The table shows an overview of all available operations that can be applied in operation filters. Depending on the operation, only specific operation parameters can be used in query filters. These are specified in the “Operation parameters” column. 

Note: Derived operation parameters such as primary semantic elements or primary semantic core objects, for example, can be used whenever the parameter from which they are derived can be used.

Special features of triggers
No read operations can be used for triggers. In addition, the operation groups Query (operation: Use in structured query), Display of objects (operation: Display in graph editor) and Edit (operation: Validate attribute value) are not available for triggers.

In addition, the “Accessed element” operation parameter is available for triggers in the “Create” operations if the time/type of execution is set to After the change or End of transaction.

Operation group Operation Operation parameter
Query Use in structured query Accessed element
Display of objects Display in graph editor Accessed element
Edit Validate attribute value Accessed element, property, superordinate element, (parameter to be checked: attribute value)
User-defined operation    
Generate Generate attribute (Super) type, superordinate element
Generate extension (Super) type, superordinate element, core object
Generate object (Super) type
Generate folder Folder
Generate relation (Super) type, superordinate element, relation target, inverse relation type
Generate relation half (Super) type, superordinate element, relation target
Generate type (Super) type
Add translation Accessed element, property, superordinate element
Read Read all objects/properties of a type (Super) type
Read attribute Accessed element, property, superordinate element
Read object Accessed element, superordinate element
Read relation Accessed element, superordinate element, property, inverse relation, relation target, inverse relation target
Read type Accessed element, superordinate element
Delete Delete attribute Accessed element, superordinate element
Delete extension Accessed element, property, superordinate element
Delete object Accessed element, superordinate element
Delete folder Folder
Delete relation half Accessed element, inverse relation, property, superordinate element, relation target, inverse relation target
Delete type Accessed element, superordinate element
Remove translation Accessed element, property, superordinate element
Modify Modify attribute value Accessed element, property, superordinate element
Modify folder Folder
Modify schema Accessed element, superordinate element
Change type Accessed element, superordinate element
Use tools Export  
Import  
Edit/execute script  

Read object
The operation Read object is used to display objects for the corresponding object type on the Objects tab. The operation does not prevent the display of the object when it is called up using a linked object. In this case, the operations for properties Read attribute and Read relation then apply.

Read all objects/properties of a type
This operation specifically controls the access rights check when processing a structured query. By default, a structured query checks all intermediate results. A search for all employees with a wage greater than €10,000 would therefore not return any hits if the wage cannot be read, even if the corresponding employee objects can be read. This behaviour is often desired, but it is seldom performant. Particularly with an extensively configured rights system, whose processing requires a lot of processor capacity, we recommend a configuration that does not require the intermediate results of a structured query to be checked, because a check of the final results is sufficient. In most semantic networks, permission can be issued for all property types (“top-level type for properties”).

To examine which intermediate results are checked, this information can be displayed in a structured query via “Settings->Personal->Structured query->Show access rights checks”.

Use in structured query (obsolete) 
If a negative access right has been defined for an element that is filtered for the operation Use in structured query, then the element may not be used in a structured query. It will not be factored into structured queries even when the (abstract) super-type is specified.  

Validate attribute value
The operation Validate attribute value is used when the attribute value to be set must satisfy certain conditions. The condition for the attribute value is defined in a structured query. Three possible definitions are available there for validation of the attribute value:

  • Condition for the attribute value to be set:
    The new value of the attribute can be validated by a comparison with a specified value in the structured query.

    Example: The attribute value may only be less or equal to 4.0.
     
  • Compare with the attribute value to be set:
    This compares the current value with the new value.

    Example: The new value of the attribute age may only be greater in this case. Smaller values are not permitted.
  • Compare the value to be set with the result of a script:
    This initially determines a comparative value by means of a script.

    The script is called using a parameter object that contains the following properties:
    Property Value
    attributeValue Value to be set
    property Property to be changed (attribute)
    topic Element of the property
    user User who sets the value

Different comparison operators are available for the validation; they can be used to compare the attribute value to be set with another value.
If the new value does not satisfy the defined condition, the filter check produces a negative result when the setting Search condition must be satisfied has been selected.
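
What such a script-based validation could look like is sketched below, assuming a JavaScript-like scripting environment. The parameter object mirrors the properties listed above (attributeValue, property, topic, user); the comparison logic and all function names are invented for the example.

// Minimal sketch of a validation that compares the value to be set with the
// result of a script. Only the parameter object shape follows the table above.

interface ValidationParameter {
  attributeValue: number; // value to be set
  property: string;       // property to be changed (attribute)
  topic: string;          // element of the property
  user: string;           // user who sets the value
}

// Example rule: the new value of the attribute "age" may only be greater
// than the comparative value determined here.
function comparativeValue(param: ValidationParameter): number {
  // In a real script this could be read from the element; here it is fixed.
  return 4.0;
}

function validate(param: ValidationParameter): boolean {
  return param.attributeValue > comparativeValue(param);
}

console.log(validate({ attributeValue: 5, property: "age", topic: "Paul", user: "admin" })); // true
console.log(validate({ attributeValue: 3, property: "age", topic: "Paul", user: "admin" })); // false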

Modify schema
The Modify schema operation concerns changes to the definition area of relations and changes to the type hierarchy (the relations is a subtype of and is a super-type of).

This example shows how groups of operations (read, generate, modify, delete) can be used sensibly when defining rights. All operations are to be prohibited for the Song type and its objects. This includes the following actions:

  • Deletion of the object type Song
  • Deletion of specific songs (objects of the type Song)
  • Deletion of attributes that occur on a Song
  • Deletion of relations that occur on a Song (relation target and source)
  • Deletion of extensions that extend objects of Song
  • Deletion of attribute and relation types that have objects or subtypes of Song as their definition area

For example, if all delete operations for an object and the corresponding type are to be prohibited, you have to ensure you cover all delete operations by means of the corresponding parameters when selecting the operation parameters in the query filter of the right:


The only condition of the query filter used is the object type Song, for which the setting Objects and Subtypes is selected. The operation parameter “Accessed element” covers the object type “Song” and all objects that belong to this type. The parameter Core object covers the extension objects that belong to songs. Attributes and relations are covered by the operation parameter “Superordinate element.”

In the rights tree, the operation filter for the delete operation comes first. This is followed by the query filter depicted below and finally the decider “Access refused.”


Query filter used in the example: “Core object,” “Superordinate element” and “Accessed element” have been selected as operation parameters. The settings used are “One parameter must apply” and “Search condition must be met.”
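
The traversal of this rights tree – operation filter, query filter with the setting “One parameter must apply,” then the decider – can be pictured with the following sketch; all names and data shapes are assumptions made for this illustration.

// Hypothetical sketch of the rights tree used in this example.

interface AccessSituation {
  operationGroup: string;                    // e.g. "Delete"
  parameters: Record<string, string | null>; // derived operation parameters
}

// "One parameter must apply": the filter passes when at least one of the
// selected parameters is matched by the structured query.
function queryFilterPasses(situation: AccessSituation, matches: Set<string>): boolean {
  return ["Accessed element", "Superordinate element", "Core object"].some((p) => {
    const value = situation.parameters[p];
    return value !== null && value !== undefined && matches.has(value);
  });
}

function decide(situation: AccessSituation, songElements: Set<string>): string {
  if (situation.operationGroup !== "Delete") return "next subtree"; // operation filter
  if (queryFilterPasses(situation, songElements)) return "Access refused"; // decider
  return "next subtree";
}

// Deleting the attribute "Duration" of Song X: the superordinate element is
// Song X, which the structured query (type Song, objects and subtypes) matches.
const songs = new Set(["Song X", "Song"]);
console.log(decide(
  {
    operationGroup: "Delete",
    parameters: {
      "Accessed element": "attr-duration",
      "Superordinate element": "Song X",
      "Core object": null,
    },
  },
  songs,
)); // "Access refused"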

Extension of the right with attribute and relation types

A right defined in this way covers all but one of the requirements described above. Only the deletion of attribute and relation types that have been defined for objects and subtypes of Song is not yet taken into account by this definition of rights.

The definition of rights is extended with the following filter:


The query filter includes all property types (attribute and relation types) that have been defined for objects or subtypes of songs. In the query filter definition, the parameter “Accessed element” and the setting “Search condition must be met” are used.

When the Rights folder is selected in the System area, the Saved test cases and Configure tabs are available in the main window. The test system area is found on the Saved test cases tab. The test system for triggers is called via the Triggers folder in the System area.

Saved test cases can be tested again here. The test interface in which the test cases can be defined can be called using the Open test environment button.

In addition to the functionalities that are described in the following chapters, Testing an access situation and Defining test cases, there is the option of testing access rights directly on an object or type. Select the access rights function using the context menu (right click). The following menu items can be selected there:

  • Object: All operations (modify, delete, read and display in graph editor) are tested on the object and their result is output.
  • All: All operations (modify, delete, read and display in graph editor) are tested on the object and on all of its properties (attributes and relations).
  • Rights system test environment: The test environment for checking rights opens.

Two areas are relevant for testing the rights system and the trigger functionality:

  • The actual test environment: The test environment offers the option of testing, for a certain test case, the access rights or whether a trigger is executed.
  • The Saved test cases tab: This lists the test cases and makes them available for subsequent checking.

Instructions for opening the test environment

  1. Select the folder Rights or Triggers in the Technical area in the Knowledge Builder.
  2. If you are working in the rights system, select the Saved test cases tab in the main window.
  3. Click Open test environment (bottom right) so that the test environment opens in a new window.

The test environment consists of several areas: the user and the element to which the property to be checked is attached are defined in the upper area. The element can be an object, a type or a property (when this is transferred as an element).

The properties area lists all properties of the selected element. Non-italic properties are specific properties that already exist on the object or the property. Italic properties, in contrast, are properties that can be created based on the schema but have not yet been created. If the creation of a new property is to be tested, the property in italics must be selected.

The operation that is to be tested can be selected in the Operation window. Depending on the selected parameters, a rights check is either possible or not.

Please note: If a property of a property – that is, a meta-property – is to be tested, the property must be marked in the property window and the As element button must be selected. In the case of relations, for example, the specific relation between two objects or properties is then selected as the element. All properties of the specific relation are now available in the properties window. (This can also be done with attributes.) The Sem. element button can be used to reverse this step.


The result of the test is displayed in the bottom window. The Check button must be selected for this. The results window displays all tested cases.

  • Element: the object, the type or the property on which the property is defined.
  • Property: the specific property that is to be tested (is blank when italic properties are tested)
  • Operation: the operation that is to be tested
  • Access allowed: the result of the test in the test case
  • Decision path: the corresponding folder which leads to the test result
  • Time: the time required for the rights check

Please note: When testing relations, the relation and the inverse relation – that is, both relation halves – are generally tested separately.

In order to monitor the functionality of the rights system, it is possible to save test cases. This is particularly important if changes are made to the rights system and you want to check afterwards whether the new result still matches the expected result. All saved test cases are displayed on the Saved test cases tab. There it is possible to check all test cases at the same time.

Instructions for defining a test case

  1. In the test environment, select the element and the property you wish to check.
  2. Select the operation to be tested.
  3. Press the Check button. The access rights are now tested for the specified parameters.
  4. In the results output, choose the test case you want to save. (You can only ever save one operation as a test case.)
  5. Press the Test case button. The selected test case is saved and is available for future checks.

Test multiple test cases simultaneously


Screenshot with saved test cases; the second test case is displayed in red.

All test cases whose test result matches the expected test result are displayed in green. If a test case is displayed in red, the result of the check differs from the expected test result. The expected test result is determined when the test case is defined: the result of this first check is displayed as the expected result during later checks of the test case. In the rights system, the expected result is either Access permitted or Access refused; for triggers, the expected result is either Execute script or “nothing happens” in the form of a hyphen.
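
The comparison of saved test cases against their expected results can be pictured with this minimal sketch; the TestCase shape and the check function are invented for the illustration.

// Hypothetical sketch of re-running saved test cases: the result of the first
// check is stored as the expected result; later checks are compared against it
// and shown in green (match) or red (mismatch).

interface TestCase {
  element: string;
  property?: string;
  operation: string;
  expected: "Access permitted" | "Access refused";
}

type RightsCheck = (t: TestCase) => "Access permitted" | "Access refused";

function rerun(testCases: TestCase[], check: RightsCheck): void {
  for (const t of testCases) {
    const actual = check(t);
    const colour = actual === t.expected ? "green" : "red";
    console.log(`${t.operation} on ${t.element}: ${actual} (${colour})`);
  }
}

// After a change to the rights system, a previously refused deletion now passes:
rerun(
  [{ element: "Song X", operation: "Delete object", expected: "Access refused" }],
  () => "Access permitted",
); // Delete object on Song X: Access permitted (red)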

Saved test cases can be deleted with Remove. If you want to edit a test case, you can use the Open test environment button to do so. In that case, all the parameters of the test case are transferred to the test environment. 

The view configuration makes it possible to configure various views of the data in i-views. The configured views are deployed in applications. It is possible, for example, to display sections of the semantic model or create specific compilations of data (e.g. in forms, tables, results lists etc.).

This allows us to answer the following questions, for example, and create the required views with view configurations:

  • How should the properties of specific objects be displayed?
  • In what order should the properties be displayed?
  • When we create a new object, which attributes and relations should be displayed so prominently that they cannot be overlooked and left unfilled?
  • What should the list of objects for a type look like?
  • Should it even be a simple list, or should the objects be displayed in tables?
  • Which elements should be displayed in the individual columns?
  • Should relation targets be displayed directly? Or only specific attributes?
  • Should we define different tabs that summarize properties and attributes that go together? ...

Example: Specific persons have the properties Name, Age, Gender, Address, Phone number, Email, Cell number, Fax, knows, is friends with and is a colleague of. Now