Application Integration and Evolutionary Information Systems
Database systems also play a major role in application integration. At the core of every integration project is data integration, which requires semantic mapping on the one hand and inter-system synchronization on the other: data must be exchanged and kept consistent among applications. The semantic integration of data types and instances demands substantial manual effort, so methods and technologies that minimize this effort are essential. An important constraint is that commercial information systems undergo constant change. The IT infrastructure of an enterprise must not inhibit this change, but must support organizational learning. In its research on evolutionary information systems, the chair investigates how the effort for demand-driven system evolution can be minimized and how organizational learning can be supported.
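As an illustration of the manual effort that semantic mapping entails, the following sketch translates records between two hypothetical application schemas. All field names, the unit conversion, and the `translate` helper are invented examples, not part of any project mentioned here:

```python
# Illustrative semantic mapping between two invented schemas:
# each target field names its source field and a converter function.
MAPPING = {
    # target field    (source field, converter)
    "patient_name": ("name",      str.strip),
    "weight_kg":    ("weight_lb", lambda lb: round(lb * 0.45359237, 1)),
    "birth_date":   ("dob",       str),  # assume identical date format here
}

def translate(source_record):
    """Translate one record from system A's schema into system B's."""
    return {tgt: conv(source_record[src]) for tgt, (src, conv) in MAPPING.items()}

rec = {"name": " Jane Doe ", "weight_lb": 150.0, "dob": "1980-05-01"}
print(translate(rec))
# → {'patient_name': 'Jane Doe', 'weight_kg': 68.0, 'birth_date': '1980-05-01'}
```

Each such mapping entry must today be written and maintained by hand, which is exactly the effort the research described above aims to minimize.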
The MedITalk project develops an ERP system for networks of medical offices (within the leading-edge cluster on medical technology). From a research point of view, evolvability is of prime interest. Autonomous medical-office systems and other data sources are the starting point, and they must remain in operation as they are. The system under development enables cooperation without requiring complete integration in advance; instead, new methods and technologies allow a demand-driven, soft migration toward system parts that cooperate better and better.
The project ProHTA (Prospective Health-Technology Assessment) is an interdisciplinary project in the leading-edge cluster on medical technology. It addresses early technology impact assessment in the public-health sector. From the data-management point of view, the development of methods and technologies for dealing with a strongly varying and dynamically growing information demand is of prime interest. In addition, issues of data quality play a significant role in this project: in particular, the requirement to measure the quality of simulation results and to control it through goal-oriented data management poses a major challenge.
Database systems today offer only limited support for preserving data quality. To guarantee high data quality in information systems beyond the boundaries of individual database systems, new methods and tools are required that appropriately support comprehensive data-quality management.
TDQMed is a BMBF-funded project in the leading-edge cluster on medical technology. Its goal is the analysis and optimization of test-data quality in the development of medical modalities. This includes the investigation of the specific quality measures required for test data (e.g., closeness to reality) and of the automatic generation of high-quality test data.
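One simple way to make a notion like "closeness to reality" operational is to compare the empirical distributions of a generated test-data set and a real data set. The measure below (one minus the total-variation distance) is a minimal sketch of this idea, not the quality measure developed in TDQMed:

```python
from collections import Counter

def closeness(test_values, real_values):
    """1 - total-variation distance between the empirical distributions
    of two samples; 1.0 means the distributions are identical."""
    def freq(values):
        n = len(values)
        return {k: v / n for k, v in Counter(values).items()}
    p, q = freq(test_values), freq(real_values)
    tvd = 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))
    return 1 - tvd

real = ["M", "F", "F", "M"]   # real attribute distribution: 50/50
test = ["M", "M", "M", "F"]   # generated test data: 75/25
print(closeness(test, real))  # → 0.75
```

A test-data generator can then be tuned until such per-attribute scores exceed a chosen threshold.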
The aforementioned medITalk project also has to cope with the quality of the data delivered by the medical offices. It has developed methods that forecast future deliveries in order to evaluate the completeness of the actual deliveries once they arrive.
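The idea of checking completeness against a forecast can be sketched as follows. This is a deliberately simple baseline (a moving average over past delivery sizes); the function names and the flagging threshold are illustrative, not medITalk's actual method:

```python
from statistics import mean

def expected_count(history, window=4):
    """Forecast the next delivery's record count as the mean
    of the last `window` deliveries (a simple baseline model)."""
    return mean(history[-window:])

def completeness(actual, history, window=4):
    """Ratio of actually delivered records to the forecast count."""
    forecast = expected_count(history, window)
    return actual / forecast

# Example: weekly record counts delivered by one medical office.
history = [1020, 980, 1010, 990]       # forecast: 1000 records
ratio = completeness(510, history)     # → 0.51
assert ratio < 0.8                     # flag the delivery as likely incomplete
```

Deliveries whose ratio falls well below 1.0 can then be flagged for follow-up with the delivering office.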
Database and Data-stream Systems
Database systems allow the efficient management of structured data. However, database management systems (DBMS) are very large software packages with high resource consumption. The recently completed CoBRA DB project has produced a modular database management system that allows data management to be adapted to specific needs and unusual technological environments, leaving out the unused parts.
One highlight is the handling of time-stamped records that enter the system one by one and represent events. Sensor networks, for instance, generate such events in large numbers. Data-stream management systems (DSMS) support the efficient handling of such data.
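A core DSMS operation on such event streams is windowed aggregation: rather than storing all events, the system summarizes them per time window. The following sketch groups a stream of `(timestamp, value)` events into non-overlapping (tumbling) windows and emits one average per window; it illustrates the general technique, not any particular DSMS's API:

```python
def tumbling_windows(events, width):
    """Group a stream of (timestamp, value) events into consecutive
    non-overlapping windows of `width` time units, yielding
    (window_start, average_value) for each non-empty window."""
    window, window_end = [], None
    for ts, value in events:
        if window_end is None:
            window_end = ts + width            # first window starts at first event
        while ts >= window_end:                # event belongs to a later window
            if window:
                yield window_end - width, sum(window) / len(window)
                window = []
            window_end += width
        window.append(value)
    if window:                                 # flush the final window
        yield window_end - width, sum(window) / len(window)

stream = [(0, 10.0), (1, 12.0), (5, 20.0), (6, 22.0)]
print(list(tumbling_windows(stream, width=5)))  # → [(0, 11.0), (5, 21.0)]
```

Because each window is emitted as soon as a later event arrives, memory usage stays bounded by the window contents regardless of stream length.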
The project DSAM (Data-Stream Application Manager, i6sdb) aims to link heterogeneous DSMS in order to exploit the strengths of each individual system. For that purpose, cost models for queries on data streams are being developed, which are then used to optimize data-stream processing. Furthermore, the improvement of data quality in data-stream systems and efficient preprocessing in sensor networks are investigated. Within DFG Research Unit 1508 (BATS), this work is evaluated and advanced using bat observation as an example.
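To show why such cost models matter for optimization, here is a toy model of a two-operator stream pipeline. The per-tuple costs and selectivities are invented numbers; real DSAM cost models are considerably more elaborate:

```python
def pipeline_cost(rate, ops):
    """Toy cost model: `ops` is a list of (cost_per_tuple, selectivity)
    pairs. Returns the total processing cost per time unit, with each
    operator thinning the stream it passes on."""
    total = 0.0
    for cost_per_tuple, selectivity in ops:
        total += rate * cost_per_tuple
        rate *= selectivity
    return total

filter_op = (1.0, 0.1)   # cheap and very selective
join_op   = (5.0, 0.5)   # expensive

# Placing the selective filter before the expensive join pays off:
print(pipeline_cost(100, [filter_op, join_op]))  # 100*1 + 10*5 = 150.0
print(pipeline_cost(100, [join_op, filter_op]))  # 100*5 + 50*1 = 550.0
```

An optimizer that can estimate these costs can reorder operators, or place them on whichever DSMS handles them best, before any tuple flows.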
A specific application area that also involves data streams and event processing is that of massively multiplayer online games (MMOGs). Here, very large numbers of players interact simultaneously in a shared virtual world, whose state changes continuously because of the decentralized generation of events. In the project i6 M2EtIS, the improvement of performance and scalability is investigated with the help of a semantic classification of event types.
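The benefit of classifying event types semantically is that each class can get its own dissemination scope, so not every event has to reach every player. The sketch below is a hypothetical illustration of this idea; the class names and scopes are invented, not M2EtIS terminology:

```python
from dataclasses import dataclass

# Invented semantic classes and their dissemination scopes.
SCOPE = {
    "chat":     "interest_group",  # only explicitly subscribed players
    "movement": "local_area",      # only players in the same region
    "world":    "broadcast",       # every player in the game world
}

@dataclass
class Event:
    etype: str
    region: str

def recipients(event, players):
    """players: dict of player name -> region. Return the names that
    must receive `event` under its class's dissemination scope."""
    scope = SCOPE.get(event.etype, "broadcast")  # unknown types go everywhere
    if scope == "broadcast":
        return set(players)
    if scope == "local_area":
        return {p for p, region in players.items() if region == event.region}
    return set()  # interest-group delivery left to a subscription service

players = {"alice": "r1", "bob": "r2", "carol": "r1"}
print(sorted(recipients(Event("movement", "r1"), players)))  # → ['alice', 'carol']
```

Restricting high-frequency classes such as movement events to a local area is what makes the event load per player scale with region size rather than with total player count.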
The SUSHy project deals with the processing of sensor data in large constructions, taking ships as an example. On the one hand, the system must be able to present the state of the construction quickly; on the other hand, it should also allow long-term evaluations with respect to maintenance and energy savings. In a test bench, data-stream systems, NoSQL databases, main-memory databases, and traditional relational DBMS are evaluated to find out whether they can meet these requirements.