Michael Philippsen
Prof. Dr. Michael Philippsen
Secretariat
- Margit Zenk
- Phone number: +49 9131 85-27621
- Mobile phone: +491746083749
- Email: margit.zenk@fau.de
-
Verification and validation in industrial practice
(Own Funds)
Term: since 01.01.2022
Detection of flaky tests based on software version control data and test execution history
Regression tests are carried out often and because of their volume also fully automatically. They are intended to ensure that changes to individual components of a software system do not have any unexpected side effects on the behavior of subsystems that they should not affect. However, even if a test case executes only unmodified code, it can still sometimes succeed and sometimes fail. This so-called "flaky" behavior can have different reasons, including race conditions due to concurrent execution or temporarily unavailable resources (e.g., network or databases). Flaky tests are a nuisance to the testing process in every respect, because they slow down or even interrupt the entire test execution and they undermine the confidence in the test results: if a test run is successful, it cannot necessarily be concluded that the program is really error-free, and if the test fails, expensive resources may have to be invested to reproduce and possibly fix the problem.
The easiest way to detect test flakiness is to repeatedly run test cases on an identical code base until the test result changes or there is reasonable statistical confidence that the test is non-flaky. However, this is rarely possible in an industrial environment, as integration or system tests can be extremely time-consuming and resource-demanding, e.g., because they require the availability of special test hardware. For this reason, it is desirable to classify test cases with regard to their flakiness without repeated re-execution, using only the information already available from the preceding development and test phases.
In 2022, we implemented and compared various so-called black-box methods for detecting test flakiness and evaluated them in a real industrial test process with 200 test cases. We classified test cases exclusively on the basis of generally available information from version control systems and test execution tools, i.e., in particular without an extensive analysis of the code base and without monitoring of the test coverage, which would in most cases be impossible for embedded systems anyway. From the 122 available indicators (including the test execution time, the number of lines of code, or the number of changed lines of code in the last 3, 14, and 54 days) we extracted different subsets and examined their suitability for detecting test flakiness using different techniques. The methods applied to the feature subsets include rule-based methods (e.g., "a test is flaky if it has failed at least five times within the observation window, but not five times in a row"), empirical evaluations (including the computation of the cumulative weighted "flip rate", i.e., the frequency of alternating between test success and failure), as well as various methods from the domain of machine learning (e.g., classification trees, random forests, or multi-layer perceptrons). By using AI-based classifiers together with the SHAP approach for explaining AI models, we determined the four most important indicators ("features") for detecting test flakiness in the industrial environment under consideration. So-called "gradient boosting" with the complete set of indicators proved to be optimal (with an F1-score of 96.5%). The same method with only the four selected features achieved just marginally lower accuracy and recall values (with almost the same F1-score).
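To illustrate the flip-rate indicator, here is a minimal sketch; the exponential-decay weighting is a hypothetical choice, not necessarily the exact formula used in the project:

```python
# Minimal sketch of a cumulative weighted "flip rate". Assumptions (not from
# the project): history is a list of booleans (True = pass, False = fail),
# and more recent flips are weighted higher via exponential decay.

def weighted_flip_rate(history, decay=0.9):
    """Return a flip rate in [0, 1]; 0 = stable, 1 = alternates every run."""
    if len(history) < 2:
        return 0.0
    weights = [decay ** (len(history) - 2 - i) for i in range(len(history) - 1)]
    flips = [w for w, a, b in zip(weights, history, history[1:]) if a != b]
    return sum(flips) / sum(weights)

# A test that alternates between pass and fail scores close to 1.
print(weighted_flip_rate([True, False, True, False, True]))  # -> 1.0
print(weighted_flip_rate([True, True, True, True, False]))   # -> ~0.29
```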
Synergies of a-priori and a-posteriori analysis methods to explain artificial intelligence
Artificial intelligence is rapidly conquering more domains of everyday life, and machines make ever more critical decisions: braking or evasive maneuvers in autonomous driving, the credit(un)worthiness of individuals or companies, the diagnosis of diseases from various examination results (e.g., cancer detection from CT/MRT scans), and many more. For such a system to be trusted in a real-life production setting, it must be ensured and proven that the learned decision rules are correct and reflect reality. Training a machine-learning model is itself a very resource-intensive process, and the quality of the result can usually only be quantified afterwards, with extremely great effort and well-founded specialist knowledge. The success and quality of the learned model depend not only on the choice of a particular AI method but are also strongly influenced by the quantity and quality of the training data.
In 2022, we therefore examined which qualitative and quantitative properties an input set must have ("a-priori evaluation") in order to yield a good AI model ("a-posteriori evaluation"). For this purpose, we compared various evaluation criteria from the literature and defined four basic indicators based on them: representativeness, freedom from redundancy, completeness, and correctness. The associated metrics allow a quantitative evaluation of the training data before the model is built. To investigate the impact of poor training data on an AI model, we experimented with the so-called "dSprites" dataset, a popular generator for image files used in the evaluation of image and pattern recognition methods. This way, we generated different training data sets that differ in exactly one of the four basic indicators and thus have quantitatively different "a-priori quality". We used all of them to train two different AI models: random forests and convolutional neural networks. Finally, we quantitatively evaluated the quality of the classification by the respective model using the usual statistical measures (accuracy, precision, recall, F1-score). In addition, we used SHAP (a method for explaining AI models) to determine the reasons for any misclassification in cases of poor data quality. As expected, the model quality correlates highly with the training data quality: the better the latter is with regard to the four basic indicators, the more precise is the classification of unknown data by the trained models. However, a noteworthy discovery emerged while experimenting with the lack of redundancy: if a trained model is evaluated with completely new/unknown inputs, the accuracy of the classification is sometimes significantly worse than if the available input data is split into a training and an evaluation data set. In the latter case, the a-posteriori evaluation of the trained AI system misleadingly suggests a higher model quality.
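The redundancy effect can be reproduced with a small, self-contained experiment; the synthetic data and the random forest below are illustrative stand-ins for the dSprites setup, not the project's code:

```python
# Sketch of the redundancy effect described above: evaluating on a split of
# redundant data overestimates model quality compared to genuinely new inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Redundant dataset: many near-duplicates of only 50 base samples.
base = X[:50]
X_red = np.vstack([base + rng.normal(scale=0.01, size=base.shape) for _ in range(20)])
y_red = np.tile(y[:50], 20)

X_tr, X_te, y_tr, y_te = train_test_split(X_red, y_red, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# The split score looks excellent because near-duplicates leak into the test
# set; scoring on genuinely new inputs reveals the true (lower) quality.
print("split of redundant data:", accuracy_score(y_te, model.predict(X_te)))
print("genuinely new data:     ", accuracy_score(y[500:], model.predict(X[500:])))
```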
Few-Shot Out-of-Domain Detection in Natural Language Processing Applications
Natural language processing (NLP for short) using artificial intelligence has many areas of application, e.g., telephone or written dialogue systems (so-called chat bots) that provide cinema information, book a ticket, record sick leave, or answer various questions arising during certain industrial processes. Such chat bots are often also involved in social media, e.g., to recognize critical statements and to moderate them if necessary. With increasing progress in the field of artificial intelligence in general and NLP in particular, self-learning models are spreading that dynamically (and therefore mostly unsupervised) supplement their technical and linguistic knowledge from concrete practical use. However, such approaches are susceptible to intentional or unintentional manipulation. Examples from industrial practice have shown that chat bots in social networks quickly "learn" racist statements, for instance, and then make dangerous extremist statements. It is therefore of central importance that NLP-based models are able to distinguish between valid "in-domain (ID)" and invalid "out-of-domain (OOD)" data (i.e., both inputs and outputs). However, the developers of an NLP system need an immense amount of ID and OOD training data for the initial training of the AI model. While the former are already difficult to find in sufficient quantities, a meaningful a-priori choice of the latter is usually hardly possible.
In 2022, we therefore examined and compared different approaches to OOD detection that work with little to no training data at all (hence called "few-shot"). RoBERTa, a widespread pre-trained transformer-based language model that is among the best currently available, served as the basis for the experimental evaluation. To improve the OOD detection, we applied "fine-tuning" and examined how reliably a pre-trained model can be adapted to a specific domain. In addition, we implemented various scoring methods and evaluated them to determine threshold values for the classification of ID and OOD data. To solve the problem of missing training data, we also evaluated a technique called "data augmentation": with little effort, GPT-3 ("Generative Pretrained Transformer 3", an autoregressive language model that uses deep learning to generate human-like text) can generate additional and safe ID and OOD data to train and evaluate NLP models.
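A minimal sketch of one such scoring method, maximum-softmax-probability thresholding on top of a RoBERTa classifier, is shown below; the model name, the number of ID classes, and the threshold are placeholders (in practice the model would be fine-tuned on the ID data and the threshold calibrated on held-out ID/OOD samples):

```python
# Hedged sketch of threshold-based OOD scoring with a RoBERTa classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-base"  # assumption: in practice fine-tuned on the ID intents
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=5)
model.eval()

def ood_score(text: str) -> float:
    """Lower max-softmax confidence -> more likely out-of-domain."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return 1.0 - torch.softmax(logits, dim=-1).max().item()

THRESHOLD = 0.5  # placeholder; would be calibrated on held-out data
text = "Book two seats for the 8pm show"
print("OOD" if ood_score(text) > THRESHOLD else "ID")
```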
Application of weighted combinatorics in the generation and selection of parameters and their representatives in software testing
Some functional testing methods (so-called black-box tests), such as equivalence class testing or boundary value analysis, focus on individual parameters. For these parameters, they determine representatives (values or classes of values) to be considered in the test. Since several parameters are usually required to perform such tests, representatives of several parameters must be combined with each other for test execution. Well-understood combinatorial methods such as "all combinations", "pair-wise", or "each choice" are usually used for this purpose. They do not take into account information about weights (attributes such as importance or priority) of the parameters and equivalence class representatives, although such weights would affect the number of associated test cases (e.g., due to importance) or their recommended order (in terms of prioritization). In addition, in the case of the equivalence class method, there are scenarios in which a combination of several invalid classes in a single test case could optionally be explicitly desired, completely undesirable, or limited to a certain number, in order to specifically test fault combinations on the one hand, but also to simplify fault localization on the other. There is reason to believe that by considering such weights and options, more targeted and ultimately more efficient test cases can be derived.
In 2023, we evaluated and compared known combinatorial approaches that take weights into account when combining parameters or their values. Based on this, we developed a novel approach to generate and select parameters and their representatives in software testing. The proposed method uses a weighting system to prioritize the individual parameters, their equivalence classes, and concrete representatives in a set of test cases. If necessary, their interactions can also be specifically weighted in order to allow certain combinations to occur more frequently in the generated test cases. To evaluate the approach, we defined a suitable prototype data structure that represents the various weightings. We then implemented evaluation functions for existing sets of test cases in order to quantitatively determine how well such a test case set satisfies the specified combinatorics. In a further step, we used these evaluation functions in combination with various systematic methods and heuristics (the SMT solver Z3, simulated annealing, and genetic algorithms) to generate new test cases that match the weighting or to optimize existing sets by adding missing test cases. Simulated annealing was the fastest and gave the best results in the test series. Although the Z3-based approach worked well for small problems, it was no longer practical for larger test cases due to exorbitant runtimes.
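For illustration, a greedy, weight-aware variant of pair-wise test case generation might look as follows; the parameters, weights, and the greedy strategy are hypothetical and merely sketch the idea of letting weights steer which combinations are covered first:

```python
# Illustrative sketch (not the project's tool): greedily pick test cases that
# cover parameter-value pairs, where a pair inherits the product of the
# user-assigned value weights, so important pairs are covered first.
from itertools import combinations, product

params = {                      # parameter -> {value: weight}
    "os":      {"linux": 3, "windows": 1},
    "browser": {"firefox": 2, "chrome": 2},
    "lang":    {"de": 1, "en": 2},
}

def pair_weight(test):
    """Sum of weights of all not-yet-covered pairs in a candidate test case."""
    return sum(
        params[p1][test[p1]] * params[p2][test[p2]]
        for p1, p2 in combinations(params, 2)
        if (p1, test[p1], p2, test[p2]) not in covered
    )

covered, suite = set(), []
candidates = [dict(zip(params, vs))
              for vs in product(*[p.keys() for p in params.values()])]
while any(pair_weight(t) > 0 for t in candidates):
    best = max(candidates, key=pair_weight)   # heaviest uncovered pairs first
    suite.append(best)
    for p1, p2 in combinations(params, 2):
        covered.add((p1, best[p1], p2, best[p2]))

for t in suite:
    print(t)
```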
-
Promoting computer science as the basis for successful STEM studies along the entire education chain
(Third Party Funds Single)
Term: 01.11.2019 - 31.10.2022
Funding source: Bayerisches Staatsministerium für Wissenschaft und Kunst (StMWK) (seit 2018)
URL: https://www.ddi.tf.fau.de/forschung/laufende-projekte/cs4mints-informatik-als-grundlage-eines-erfolgreichen-mint-studiums-entlan
Progressive digitalization is changing not only the job market but also the educational landscape. With funding from the DigitalPakt Schule and specifically from the BAYERN DIGITAL II program, serious changes in computer science education are being driven forward, which entail new challenges at the various levels of education.
The CS4MINTS project addresses these challenges along the educational levels and ties in with measures already launched as part of the MINTerAKTIV project, such as coping with the increasing heterogeneity of students in the introductory computer science course.
For example, to support gifted students, the Frühstudium (early university studies) in computer science is actively promoted among girls, and the offer is being explicitly expanded. In the long term, a significant increase in the proportion of women in computer science is to be achieved through early action against gender-specific stereotypes regarding computer science and through an expansion of the training program to include gender-sensitive computer science instruction in all types of schools.
The expansion of the compulsory subject of computer science in all schools also creates a great need for suitable teaching concepts and a strengthening of teacher training. For this purpose, a regional network is to be established during the project period to provide university-developed and evaluated teaching ideas for strengthening STEM in the curricular and extra-curricular settings.
In 2020, we began the initial piloting of the design to automate feedback in the introductory programming exercises. For this purpose, we analyzed the return values of the JUnit tests of students' solutions and investigated possible sources of errors. The next step is to work out a way to infer programming errors or student misconceptions from these return values. Ultimately, these efforts aim to make automatically generated, competence-oriented feedback available to students after they have submitted their programming tasks (or, if necessary, already during the development phase). The feedback should show where errors occurred in the program code and point out possible causes.
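A minimal sketch of how such feedback could be derived from JUnit reports is shown below; the report path, the failure categories, and the hints are illustrative assumptions, not the project's actual mapping:

```python
# Hypothetical sketch of the feedback idea: parse a JUnit XML report and map
# common failure types to competence-oriented hints for students.
import xml.etree.ElementTree as ET

HINTS = {
    "AssertionError":                 "Check the expected value in the task description.",
    "NullPointerException":           "A variable may be used before it is initialized.",
    "ArrayIndexOutOfBoundsException": "Check your loop bounds against the array length.",
}

def feedback_from_report(path):
    for case in ET.parse(path).getroot().iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            error_type = (failure.get("type") or "").split(".")[-1]
            hint = HINTS.get(error_type, "Re-read the task; a test expectation was not met.")
            print(f"{case.get('classname')}.{case.get('name')}: {hint}")

# Usage (path is illustrative):
# feedback_from_report("target/surefire-reports/TEST-Exercise1.xml")
```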
Concerning the handling of heterogeneity, in 2020 we compared the content of the Repetitorium Informatik (RIP) course with the Bavarian curricula of different school types. Subsequently, the content must be adapted so that first-year students from the most diverse educational backgrounds have equal opportunities to identify possible deficits through the Repetitorium and to remedy them. In addition, a daily programming consultation hour was set up for the first time during the Repetitorium in the winter term 2020. Here, participants were able to ask questions and receive feedback on the assignments.
For many students, the first steps of learning to program are among the major challenges at the beginning of their studies. In order to provide additional feedback to novice programmers, we designed and piloted the Feedback+ project in 2021. Within the framework of Feedback+, students have the opportunity to document problems that occur while working on the exercises or while setting up/using the programming environment. They can also receive additional feedback in weekly individual consultation sessions. For this purpose, we set up a StudOn environment in which problems can be systematically documented. An initial evaluation in the form of individual interviews with the participating students yielded consistently positive feedback and motivates us to continue the project.
-
Cooperative Exploration and Analysis of Software in a Virtual/Augmented Reality Appliance
(Third Party Funds Group – Sub project)
Overall project: Cooperative Exploration and Analysis of Software in a Virtual/Augmented Reality Appliance
Term: 01.09.2018 - 31.12.2022
Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
URL: https://www2.cs.fau.de/research/Holoware/
Understanding software accounts for a large share of the programming effort for a software system: up to 30% in development projects and up to 80% in maintenance projects. An efficient and effective way of comprehending software is therefore necessary in a modern software engineering workplace. Three-dimensional software visualization already boosts comprehension and efficiency, so utilizing the latest virtual reality techniques seems natural. Within the scope of the Holoware project, we create an environment to cooperatively explore and analyze a software project using virtual/augmented reality techniques as well as artificial intelligence algorithms. The software project in question is visualized in this virtual reality such that multiple participants can simultaneously explore and analyze the software. They can cooperate by communicating about their findings. Different participants benefit from different perspectives on the software, which is augmented with domain-specific additional information. This provides them with intuitive access to the structure and behavior of the software. Various use cases are possible, for example the cooperative analysis of a runtime anomaly by a team of domain experts. The domain experts all see the same static structure, augmented with domain-specific and detailed information. In the VR environment, they can share their findings and cooperate using their different expertise.
In addition, the static and dynamic properties of the software system are analyzed. Static properties include the source code, static call relationships, or metrics such as LoC, cyclomatic complexity, etc. Dynamic properties can be grouped into logs, traces, runtime metrics, or configurations that are read in at runtime. The challenge lies in aggregating, analyzing, and correlating this wealth of information. An anomaly and significance detection is being developed that automatically detects both structural and runtime anomalies. In addition, a prediction system is set up to make statements about component health. This makes it possible, for example, to predict which components are at risk of failing in the near future. Beforehand, the log entries are added to the traces, creating a detailed picture of the dynamic call relationships. These dynamic relationships are mapped to the static call graph because they describe calls that do not result from the static analysis (for example, REST calls across several distributed components).
In 2018, the following significant contributions have been made:
- Development of a functional VR visualization prototype for demonstration and research purposes.
- Mapping between dynamic run time data and static structure (required by later analysis and visualization tasks).
- First draft and implementation of the trace anomaly detection by an unsupervised learning procedure. Evaluation and further improvements will follow in the coming months.
In 2019 we achieved the following improvements:
- Extension of the prototype to display dynamic software behaviour.
- Cooperative (remote-)usability of the visualization prototype.
- Interpretation of commit messages for anomaly detection.
- Clustering system calls according to use cases.
Our paper "Towards Collaborative and Dynamic Software Visualization in VR" has been accepted for publication at the International Conference on Computer Graphics Theory and Applications (VISIGRAPP) 2020. It presents the efficiency of our prototype at increasing the software understanding process. In 2020, our paper "A Layered Software City for Dependency Visualization" was accepted at the International Conference on Computer Graphics Theory and Applications (VISIGRAPP) 2021 and later received the "Best Paper Award". We demonstrated that our Layered Layout for Software Cities simplifies the analysis of software architecture and outperforms the standard layout by far. We successfully concluded the research project with a final prototype and the resulting publications.
In 2021, after the end of the official project funding, we were asked to submit an extended version of the award paper ("Static And Dynamic Dependency Visualization In A Layered Software City") to a journal for review. There we present a night view of the city that shows dynamic dependencies as arcs. We thus addressed a central remaining issue: the visualization of dynamic dependencies. In the paper "Trace Visualization within the Software City Metaphor: A Controlled Experiment on Program Comprehension" at the IEEE Working Conference on Software Visualization (VISSOFT), we displayed dynamic dependencies within the software city by means of light intensities and were able to show that this representation is more helpful than drawing all dependencies. For this paper, too, we were invited to submit an extended journal article, "Trace Visualization within the Software City Metaphor: Controlled Experiments on Program Comprehension", for review. This article demonstrates an extended visualization of dynamic dependencies and colors the arcs based on HTTP status codes.
In 2022, both journal papers were accepted: "Static And Dynamic Dependency Visualization in a Layered Software City" was published in the Springer Nature Computer Science journal, and "Trace Visualization within the Software City Metaphor: Controlled Experiments on Program Comprehension" was accepted for the Information and Software Technology journal. To finalize Holoware, all extensions were combined into a single visualization. For this purpose, different views were introduced between which the user can switch: in the day view, the software architecture can be analyzed in the novel Holoware layered layout, and in the night view, dynamic dependencies are displayed. As part of a master's thesis, Holoware was also implemented as an AR visualization, so that it can easily be used as a showcase or in everyday work.
In mid-2023, we finalized the project with the dissertation "Visualizing the statics, dynamics and infrastructure of software using the city metaphor". It summarizes all investigated aspects: (a) the static structure of the system, to understand the software architecture; (b) the dynamics of the system, to understand the dynamic dependencies (e.g., in modern microservice architectures); and (c) the infrastructure of the system, to analyze costs and promote the understanding of software operation. We also uncovered another use case: the use of Holoware at trade fairs. The visualization of the software makes it easy to get into conversation with other software developers, as the visualized software can be discussed immediately. To this end, we simplified the setup of the AR and VR visualization so that Holoware can easily be started without much prior technical knowledge. In addition, we improved the contrast of the visualization to make outlines and arcs easier to recognize, especially in very bright lighting conditions.
-
Automatic Testing of Compilers
(Own Funds)
Term: since 01.01.2018
URL: https://www.cs2.tf.fau.de/forschung/projekte/autocomptest/
Compilers for programming languages are very complex applications and their correctness is crucial: If a compiler is erroneous (i.e., if its behavior deviates from that defined by the language specification), it may generate wrong code or crash with an error message. Often, such errors are hard to detect or circumvent. Thus, users typically demand a bug-free compiler implementation.
Unfortunately, research studies and online bug databases suggest that probably no real compiler is bug-free. Several research works therefore aim to improve the quality of compilers. Since formal verification (i.e., a proof of a compiler's correctness) is often prohibitively expensive in practice, most of the recent works focus on techniques for extensively testing compilers in an automated way. For this purpose, the compiler under test is usually fed with a test program and its behavior (or that of the generated program) is checked: If the actual behavior does not match the expectation (e.g., if the compiler crashes when fed with a valid test program), a compiler bug has been found. If this testing process is to be carried out in a fully automated way, three main challenges arise:
- Where do the test programs come from that are fed into the compiler?
- What is the expected behavior of the compiler or its output program? How can one determine if the compiler worked correctly?
- How can test programs that indicate an error in the compiler be prepared to be most helpful in fixing the error in the compiler?
While the scientific literature proposes several approaches for dealing with the second challenge (which are also already established in practice), the automatic generation of random test programs remains a challenge. If all parts of a compiler are to be tested, the test programs have to conform to all rules of the respective programming language, i.e., they have to be syntactically and semantically correct (and thus compilable). Due to the large number of rules of "real" programming languages, the generation of such compilable programs is a non-trivial task. This is further complicated by the fact that the program generation has to be as efficient as possible: Research suggests that the efficiency of such an approach significantly impacts its effectiveness -- in a practical scenario, a tool can only be used for detecting compiler bugs if it can generate many (and large) programs in a short time.
The lack of an appropriate test program generator and the high costs associated with the development of such a tool often prevent the automatic testing of compilers in practice. Our research project therefore aims to reduce the effort for users to implement efficient program generators.
The large programs produced by efficient random program generators are difficult to use for debugging. Typically, only a small part of the program causes the error, and as many of the other parts as possible must be removed automatically before the error can be corrected.
This so-called test case reduction also uses the solutions already mentioned for detecting the expected behavior so that a joint consideration makes sense.
Test case reduction is an essential component for automatically generated programs and should be designed to process error-triggering programs from all sources. Unfortunately, it is often unclear which of the various methods presented in the scientific literature is best suited to a particular situation. Additionally, test case reduction can be a time-consuming process. Our research project aims to create a significant collection of unreduced test cases and to use it to compare and improve existing procedures.
In 2018, we started the development of such a tool. As input, it requires a specification of a programming language's syntactic and semantic rules by means of an abstract attribute grammar. Such a grammar allows for a short notation of the rules on a high level of abstraction. Our newly devised algorithm then generates test programs that conform to all of the specified rules. It uses several novel technical ideas to reduce its expected runtime. This way, it can generate large sets of test programs in acceptable time, even when executed on a standard desktop computer. A first evaluation of our approach did not only show that it is efficient and effective, but also that it is versatile. Our approach detected several bugs in the C compilers gcc and clang (and achieved a bug detection rate which is comparable to that of a state-of-the-art C program generator from the literature) as well as multiple bugs in different SMT solvers. Some of the bugs that we detected were previously unknown to the respective developers.
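For illustration, the core idea of grammar-based program generation can be sketched in a few lines; the toy context-free grammar below omits the semantic (attribute) rules that our tool enforces:

```python
# Toy sketch of grammar-based random program generation. The project's tool
# additionally evaluates attribute-grammar rules to guarantee semantic
# correctness; this context-free version only guarantees syntactic validity.
import random

GRAMMAR = {
    "expr":   [["term", "+", "expr"], ["term"]],
    "term":   [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["NUM"]],
}

def generate(symbol="expr", depth=0, max_depth=6):
    if symbol == "NUM":
        return str(random.randint(0, 9))
    if symbol not in GRAMMAR:
        return symbol                      # terminal token such as "+" or "("
    rules = GRAMMAR[symbol]
    # Near the depth limit, prefer the shortest production to force termination.
    rule = min(rules, key=len) if depth >= max_depth else random.choice(rules)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

random.seed(1)
for _ in range(3):
    print(generate())  # syntactically valid arithmetic expressions
```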
In 2019, we implemented additional features for the definition of language specifications and improved the efficiency of our program generator. These two contributions considerably increased the throughput of our tool. By developing additional language specifications, we were also able to uncover bugs in compilers for the programming languages Lua and SQL. The results of our work led to a publication that we submitted at the end of 2019 (and which has been accepted by now). Besides the work on our program generator, we also began working on a test case reduction technique. It reduces the size of a randomly generated test program that triggers a compiler bug since this eases the search for the bug's root cause.
In 2020, we focussed on language-agnostic techniques for the automatic reduction of test programs. The scientific literature has proposed different reduction techniques, but since there is no conclusive comparison of these techniques yet, it is still unclear how efficient and effective the proposed techniques really are. We identified two main reasons for this, which also hamper the development and evaluation of new techniques. Firstly, the available implementations of the proposed reduction techniques use different implementation languages, program representations and input grammars. Therefore, a fair comparison of the proposed techniques is almost impossible with the available implementations. Secondly, there is no collection of (still unreduced) test programs that can be used for the evaluation of reduction techniques. As a result, the published techniques have only been evaluated with few test programs each, which compromises the significance of the published results. Furthermore, since some techniques have only been evaluated with test programs in a single programming language, it is still unclear how well these techniques generalize to other programming languages (i.e., how language-agnostic they really are). To close these gaps, we initiated the development of a framework that contains implementations of the most important reduction techniques and that enables a fair comparison of these techniques. In addition, we also started to work on a benchmark that already contains about 300 test programs in C and SMT-LIB 2 that trigger about 100 different bugs in real compilers. This benchmark not only enables conclusive comparisons of reduction techniques but also reduces the work for the evaluation of future techniques. Some first experiments already exposed that there is no reduction technique yet that performs best in all cases.
In this year, we also investigated how the random program generator that has been developed in the context of this research project can be extended to detect not only functional bugs but also performance problems in compilers. A new technique has been developed within a thesis that first generates a set of random test programs and then applies an optimization technique to gradually mutate these programs. The goal is to find programs for which the compiler under test has a considerably higher runtime than a reference implementation. First experiments have shown that this approach can indeed detect performance problems in compilers.
In 2021, we finished the implementation of the most important test case reduction techniques from the scientific literature as well as the construction of a benchmark for their evaluation. Building upon our framework and benchmark, we also conducted a quantitative comparison of the different techniques; to the best of our knowledge, this is by far the most extensive and conclusive comparison of the available reduction techniques to date. Our results show that there is no reduction technique yet that performs best in all cases. Furthermore, we detected that there are possible outliers for each technique, both in terms of efficiency (i.e., how quickly a reduction technique is able to reduce an input program) and effectiveness (i.e., how small the result of a reduction technique is). This indicates that there is still room for future work on test case reduction, and our results give some insights for the development of such future techniques. For example, we found that the hoisting of nodes in a program's syntax tree is mandatory for the generation of small results (i.e., to achieve a high effectiveness) and that an efficient procedure for handling list structures in the syntax tree is necessary. The results of our work led to a publication submitted and accepted in 2021.
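One of the classic language-agnostic techniques compared in our framework is delta debugging (ddmin); a compact sketch of its reduce-to-complement loop is shown below (simplified to flat token lists, whereas real reducers work on syntax trees):

```python
# Compact sketch of delta debugging (ddmin): repeatedly try to remove chunks
# of the input while the failure-triggering property is preserved.
def ddmin(tokens, still_fails):
    """Shrink `tokens` while `still_fails` keeps returning True."""
    n = 2
    while len(tokens) >= 2:
        chunk = max(1, len(tokens) // n)
        subsets = [tokens[i:i + chunk] for i in range(0, len(tokens), chunk)]
        reduced = False
        for i in range(len(subsets)):
            candidate = [t for s in subsets[:i] + subsets[i + 1:] for t in s]
            if still_fails(candidate):     # complement still triggers the bug
                tokens, n, reduced = candidate, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(tokens):
                break
            n = min(n * 2, len(tokens))    # increase granularity and retry
    return tokens

# Toy oracle: the "bug" triggers whenever both "a" and "b" are present.
print(ddmin(list("xxaxxbxx"), lambda c: "a" in c and "b" in c))  # -> ['a', 'b']
```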
In this year, we also investigated if and how the effectiveness of our program generator can be increased by considering the coverage of the input grammar during the generation. To this end and within a thesis, several context-free coverage metrics from the scientific literature have been adapted, implemented and evaluated. The results showed that the correlation between the coverage w.r.t. a context-free coverage metric and the ability to detect bugs in a compiler is rather limited. Therefore, more advanced coverage metrics that also consider context-sensitive, semantic properties should be evaluated in future work.
In 2022, we initiated the development of a new framework for the implementation of language-adapted reduction techniques. This framework introduces a novel domain-specific language (DSL) that allows the specification of reduction techniques in a simple and concise way. The framework and the developed DSL make it possible to easily adapt existing reduction techniques to the peculiarities and requirements of a specific programming language. It is our hope that such language-adapted reduction techniques can be even more efficient and effective than the existing, language-agnostic reduction techniques. In addition, the developed framework should also reduce the effort for the development of future reduction techniques; this way, our framework could make a valuable contribution to the research in this area.
In 2023, the focus of the research project was on list structures, which had already been briefly addressed in 2021:
Almost all methods investigated since 2021 group nodes in the syntax tree into lists in order to select only the necessary nodes from these lists using a list reduction. Our experiments have shown that in some cases 70% or more of the reduction time is spent on lists with more than 2 elements. These lists are relevant because the scientific literature offers several list reduction methods, which, however, do not differ for lists with 2 or fewer elements. Since such lists take up so large a fraction of the time, we worked on integrating these different list reduction methods into our implementations of the major reduction methods developed in 2020/2021. In addition to the methods found in the literature, we also considered methods that are only described on a website or whose source code is freely accessible. We also investigated how a list reduction can be interrupted at one point and resumed later. The idea was to reduce another list in the meantime, based on a prioritization, so that the list with the greater impact on the reduction always comes first. In some cases, the hoped-for speedup occurred, but questions remain that require further experiments with prioritizing reducers and interruptible list reduction methods.
-
OpenMP for reconfigurable heterogenous architectures
(Third Party Funds Group – Sub project)
Overall project: OpenMP für rekonfigurierbare heterogene Architekturen
Term: 01.11.2017 - 31.12.2023
Funding source: Bundesministerium für Bildung und Forschung (BMBF)
URL: https://www2.cs.fau.de/research/ORKA/
High-Performance Computing (HPC) is an important component of Europe's capacity for innovation and it is also seen as a building block of the digitization of the European industry. Reconfigurable technologies such as Field Programmable Gate Array (FPGA) modules are gaining in importance due to their energy efficiency, performance, and flexibility.
There is also a trend towards heterogeneous systems with FPGA-based accelerators. The great flexibility of FPGAs allows a large class of HPC applications to be realized with them. However, FPGA programming has mainly been reserved for specialists, as it is very time-consuming. For that reason, the use of FPGAs in scientific HPC is still rare today.
In the HPC environment, there are various programming models for heterogeneous systems offering certain types of accelerators. Common models include OpenCL (http://www.opencl.org), OpenACC (https://www.openacc.org), and OpenMP (https://www.OpenMP.org). These standards, however, are not yet available for use with FPGAs.
The goals of the ORKA project are:
- Development of an OpenMP 4.0 compiler targeting heterogeneous computing platforms with FPGA accelerators in order to simplify the usage of such systems.
- Design and implementation of a source-to-source framework transforming C/C++ code with OpenMP 4.0 directives into executable programs utilizing both the host CPU and an FPGA.
- Utilization (and improvement) of existing algorithms mapping program code to FPGA hardware.
- Development of new (possibly heuristic) methods to optimize programs for inherently parallel architectures.
In 2018, the following important contributions were made:
- Development of a source-to-source compiler prototype for the rewriting of OpenMP C source code (cf. goal 2).
- Development of an HLS compiler prototype capable of translating C code into hardware. This prototype later served as starting point for the work towards the goals 3 and 4.
- Development of several experimental FPGA infrastructures for the execution of accelerator cores (necessary for the goals 1 and 2).
In 2019, the following significant contributions were achieved:
- Publication of two peer-reviewed papers: "OpenMP on FPGAs - A Survey" and "OpenMP to FPGA Offloading Prototype using OpenCL SDK".
- Improvement of the source-to-source compiler in order to properly support OpenMP-target-outlining for FPGA targets (incl. smoke tests).
- Completion of the first working ORKA-HPC prototype supporting a complete OpenMP-to-FPGA flow.
- Formulation of a genome for the pragma-based genetic optimization of the high-level synthesis step during the ORKA-HPC flow (a sketch of this idea follows after this list).
- Extension of the TaPaSCo composer to allow for hardware synchronization primitives inside of TaPaSCo systems.
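To make the genome idea above concrete, here is a hedged, self-contained sketch of a genetic optimization over pragma settings; the gene pool, the fitness stand-in, and all parameters are hypothetical, since the real flow evaluates genomes by actually running the high-level synthesis:

```python
# Illustrative genetic optimization over HLS pragma settings (all names and
# the fitness function are hypothetical; the real genome encodes HLS pragmas
# such as unroll factors and pipeline toggles, evaluated via synthesis runs).
import random

GENES = {"unroll": [1, 2, 4, 8], "pipeline": [0, 1], "partition": [1, 2, 4]}

def random_genome():
    return {k: random.choice(v) for k, v in GENES.items()}

def fitness(g):
    # Stand-in for "run HLS, read the performance/area estimate".
    return g["unroll"] * (2 if g["pipeline"] else 1) / (1 + 0.1 * g["partition"])

def evolve(pop_size=20, generations=30, mutation_rate=0.2):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fittest
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in GENES}  # crossover
            if random.random() < mutation_rate:                      # mutation
                k = random.choice(list(GENES))
                child[k] = random.choice(GENES[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
print(evolve())  # best pragma configuration found
```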
In 2020, the following significant contributions were achieved:
- Improvement of the Genetic Optimization.
- Engineering of a Docker container for reliable reproduction of results.
- Integration of software components from project partners.
- Development of a plugin architecture for Low-Level-Platforms.
- Implementation and integration of two LLP plugin components.
- Broadening of the accepted subset of OpenMP.
- Enhancement of the test suite.
In 2021, the following significant contributions were achieved:
- Enhancement of the benchmark suite.
- Enhancement of the test suite.
- Successful project completion with live demo for the project sponsor.
- Publication of the paper "ORKA-HPC - Practical OpenMP for FPGAs".
- Release of the source code and the reproduction package.
- Enhancement of the accepted OpenMP subset with new clauses to control the FPGA related transformations.
- Improvement of the Genetic Optimization.
- Comparison of the estimated performance data given by the HLS and the real performance.
- Synthesis of a linear regression model for performance prediction based on that comparison.
- Implementation of an infrastructure for the translation of OpenMP reduction clauses.
- Automated translation of the OpenMP pragma `parallel for` into a parallel FPGA system.
In 2022, the following significant contributions were achieved:
- Generation and publication of an extensive dataset on HLS area estimates and actual performance.
- Creation and comparative evaluation of different regression models to predict actual system performance from early (area) estimates.
- Evaluation of the area estimates generated by the HLS.
- Publication of the paper “Reducing OpenMP to FPGA Round-trip Times with Predictive Modelling”.
- Development of a method to detect and remove redundant read operations in FPGA stencil codes based on the polyhedral model.
- Implementation of the method for ORKA-HPC.
- Quantitative evaluation of that method to show the strength of the method and to show when to use it.
- Publication of the paper “Employing Polyhedral Methods to Reduce Data Movement in FPGA Stencil Codes”.
In 2023, the following significant contributions were achieved:
- Development and implementation of an optimization method for canonical loop shells (e.g. from OpenMP target regions) for FPGA hardware generation using HLS. The core of the method is a loop restructuring based on the polyhedral model that uses loop tiling, pipeline processing, and port widening to avoid unnecessary data transfers from/to the onboard RAM of the FPGA, increase the number of parallel active circuits, maximize data throughput to FPGA board RAM, and hide read/write latencies.
- Quantitative evaluation of the strengths and application areas of this optimization method using ORKA-HPC.
- Publication of the method in the conference paper "Employing polyhedral methods to optimize stencils on FPGAs with stencil-specific caches, data reuse, and wide data bursts".
- Publication of a reproduction package for the optimization method.
- Presentation of the method at the conference "14th International Workshop on Polyhedral Compilation Techniques" in a half-hour talk.
- Development of a method for the fully automatic integration of multi-purpose caches into FPGA solutions generated from OpenMP.
- Evaluation of multi-purpose caches in combination with HLS generated hardware blocks.
- Publication of the paper "Multipurpose Cacheing to Accelerate OpenMP Target Regions on FPGAs" (Best Paper Award).
-
Recurrent Neural Networks (RNNs) for Real-Time Estimation of Nonlinear Motion Models
(Third Party Funds Single)
Term: 01.10.2017 - 31.03.2021
Funding source: Fraunhofer-Gesellschaft
URL: https://www2.cs.fau.de/research/RuNN/
With the growing availability of information about an environment (e.g., the geometry of a gymnasium) and about the objects therein (e.g., athletes in the gymnasium), there is an increasing interest in bringing that information together profitably (so-called information fusion) and in processing it. For example, one would like to reconstruct physically correct animations (e.g., in virtual reality, VR) of complex and highly dynamic movements (e.g., in sports situations) in real time. Likewise, manufacturing plants in industry that suffer from unfavorable environmental conditions (e.g., magnetic field interference or missing GPS signal) benefit from, e.g., high-precision localization of goods. Typically, to describe movements, one uses either poses that describe a "snapshot" of a state of motion (e.g., idle state, stoppage) or a motion model that describes movement over time (e.g., walking or running). In addition, human movements may be identified, detected, and sensed by different sensors (e.g., on the body) and mapped in the form of poses and motion models. Different types of modern sensors (e.g., camera, radio, and inertial sensors) provide information of varying quality.
In principle, with the help of expensive and highly precise measuring instruments, poses and motion models can be extracted without errors, for example from positions on small tracking areas. Positions, e.g., of human extremities, can describe or be described by poses and motion models. Camera-based sensors deliver the required high-frequency and high-precision reference measurements on small areas. However, as the size of the tracking surface increases, the usability of camera-based systems decreases (due to inaccuracies or occlusion issues). Likewise, on large areas, radio and inertial sensors only provide noisy and inaccurate measurements. Although a combination of radio and inertial sensors based on Bayesian filters achieves greater accuracy, it is still inadequate to precisely sense human motion on large areas, e.g., in sports, as human movement changes abruptly and rapidly. Thus, the resulting motion models are inaccurate.
Furthermore, every human movement is highly nonlinear (or unpredictable). We cannot map this nonlinearity correctly with today's motion models. Bayesian filters describe these models, but these (statistical) methods break a nonlinear problem down into linear subproblems, which in turn cannot physically represent the motion. In addition, current methods produce high latency when they require accuracy.
Due to these three problems (inaccurate position data on large areas, nonlinearity, and latency), today's methods are unusable, e.g., for sports applications that require short response times. This project aims to counteract these nonlinearities by using machine learning methods. The project includes research on recurrent neural networks (RNNs) to create nonlinear motion models. Since modern Bayesian filtering methods (e.g., Kalman and particle filters) and other statistical methods can only describe the linear portions of nonlinear human movements (e.g., the relative position of the head w.r.t. the trunk while walking or running), they are physically not completely correct.
Therefore, the main goal is to evaluate how machine learning methods can describe complex and nonlinear movements. We examined whether RNNs describe the movements of an object physically correctly and can support or replace previous methods.
As part of a large-scale parameter study, we simulated physically correct movements and optimized RNN procedures on these simulations. We successfully showed that, with the help of suitable training methods, RNN models can either learn physical relationships or shapes of movement.
This project addresses three key topics:
I. A basic implementation investigates how and why methods of machine learning can be used to determine models of human movement.
In 2018, we first established a deeper understanding of the initial situation and the problem definition. With the help of different basic implementations (different motion models) we investigated (1) how different movements (e.g., humans: walking, running, slalom; vehicles: meander, zig-zag) affect the measurement inaccuracies of different sensor families, (2) how measurement inaccuracies of different sensor families (e.g., visible orientation errors, audible noise, and deliberate artificial errors) affect human motion, and (3) how different filter methods for error correction (that balance accuracy and latency) affect both motion and sensing. In addition, we showed (4) how measurement inaccuracies (due to the use of current Bayesian filtering techniques) correlate nonlinearly with human posture (e.g., the gait apparatus) and predictably affect health (simulator sickness), using machine learning.
We studied methods of machine and deep learning for motion detection (humans: head, body, upper and lower extremities; vehicles: single- and bi-axial) and motion reconstruction (5) based on inertial, camera, and radio sensors, as well as various methods for feature extraction (e.g., SVM, DT, k-NN, VAE, 2D-CNN, 3D-CNN, RNN, LSTM, M/GRU). These were interconnected into different hybrid filter models to enrich extracted features with temporal and context-sensitive motion information, potentially creating more accurate, robust, and close to real-time motion models. In this way, these mechanisms learned (6) motion models for multi-axis vehicles (e.g., forklifts) based on inertial, radio, and camera data, which generalize to different environments or tracking surfaces (with varying size, shape, and sensory structure, e.g., magnetic field, multipath, texturing, and illumination). Furthermore, (7) we gained a deeper understanding of the effects of non-constantly accelerated motion models on radio signals. On the basis of these findings, we trained an LSTM model that predicts different movement speeds and motion forms of a single-axis robot (i.e., a Segway) close to real time and more accurately than conventional methods.
In 2019, we found that these models can also predict human movement (human movement model). We also determined that the LSTM models can either be fully self-sufficient at runtime or integrated as support points into localization estimates, e.g., into Pedestrian Dead Reckoning (PDR) methods.
II. Based on this, we try to find ways to optimize the basic implementation in terms of robustness, latency, and reusability.
In 2018, we used the findings from I. (1-7) to stabilize (1) so-called relative Pedestrian Dead Reckoning (PDR) methods using motion classifiers. These enable a generalization to any environment. A deeper understanding of radio signals (2) allowed us to learn long-term errors in RNN-based motion models. This improves the position accuracy, the stability, and a near real-time prediction. First experiments showed the robustness of the movement models (3) with the help of different real movement trajectories (unknown to the models) for one- and two-axis vehicles. Furthermore, we investigated (4) how hybrid filter models (e.g., the interconnection of feature extractors such as 2D/3D-CNNs and time-series trackers such as RNN-LSTMs) provide more accurate, more stable, and filtered (outlier-corrected) results.
In 2019, we showed that models of the RNN family extrapolate movements into the future so that they compensate for the latency of the processing pipeline and beyond. Furthermore, we examined the explainability, interpretability, and robustness of the models examined here, and their reusability for human movement. With the help of a simulator, we generated physically correct movements, e.g., positions of pedestrians, cyclists, cars, and planes. Based on this data, we showed that RNN models can interpolate between different types of movement, can compensate for missing data points, interpret white and random noise as such, and can extrapolate movements. The latter enables processing-specific latency to be compensated and enables human movement to be predicted from radio and inertial data in real time.
Novel RNN architecture. Furthermore, in 2019, we researched a new architecture, or topology, of a neural network that balances the strengths and weaknesses of flat neural networks (NNs) and recurrent networks. We found this optimal NN for determining physically correct movement in a large-scale parameter study. In particular, we also optimized the model architecture and parameters for human-centered localization. These optimal architectures predict human movement far into the future from as little sensor information as possible. The architecture with the least localization error combines two DNNs with an RNN.
Interpretability of models. In 2019, we examined the functionality of this new model. For this purpose, we researched a new process pipeline for the interpretation and explanation of the model. The pipeline uses the mutual information flow and the mutual transfer entropy in combination with various targeted manipulations of the hidden states and suitable visualization techniques to describe the state of the model at any time, both subjectively and objectively. In addition, we adapted a variational autoencoder (VAE) to better visualize and interpret extracted features of a neural network. We designed and parameterized the VAE such that the reconstruction error of the signal is within the range of the measurement noise while forcing the model to store disentangled features in its latent space. This disentanglement enabled first subjective statements about the interrelationships of the features that are really necessary to optimally encode the channel state of a radio signal.
Compression. In 2019, we discovered a side effect of the VAE that offers the possibility of decentralized preprocessing of the channel information directly on the antenna. This compression reduces the data traffic, lowers the communication load, and thus increases the number of possible participants in the communication and localization in a closed sensor network.
Influence of the variation of the input information. In 2019, we also examined how changes in the input sequence length of a recurrent neural network affect the learning success and the type of results of the model. We discovered that a longer sequence persuades the model to be a motion model, i.e., to learn the form of movement, while with shorter sequences the model tends to learn physical relationships. The optimal balance between short and long sequences yields the highest accuracy.
We also investigated speed estimation using the new method. When used in a PDR model, this increased the position accuracy.
An initial work in 2019 has examined in detail which methods are best suited to estimate the speed of human movement from a raw inertial signal. A new process, a combination of a one-dimensional CNN and a BLSTM, has replaced the state of the art.
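To make this architecture concrete, the following is an illustrative PyTorch sketch of such a CNN+BLSTM speed estimator; the layer sizes, the 6-axis IMU input, and the window length are assumptions, not the project's actual configuration:

```python
# Illustrative PyTorch sketch of a 1D-CNN + bidirectional-LSTM speed estimator
# operating on windows of raw inertial (IMU) samples.
import torch
import torch.nn as nn

class SpeedEstimator(nn.Module):
    def __init__(self, channels=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # local feature extraction
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.blstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)           # regression: speed in m/s

    def forward(self, x):                              # x: (batch, time, channels)
        f = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.blstm(f)
        return self.head(out[:, -1])                   # predict from last time step

model = SpeedEstimator()
imu_window = torch.randn(8, 100, 6)                    # 8 windows of 100 IMU samples
print(model(imu_window).shape)                         # -> torch.Size([8, 1])
```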
In 2020, we optimized the architecture of the model with regard to its prediction accuracy and investigated the effects of a deep fusion of Bayesian and DL methods on prediction accuracy and robustness.
Optimization. In 2020, we improved the existing CNN and RNN architecture and proposed the fusion of a ResNet and a BLSTM. We replaced the CNN with a residual network to extract deeper and higher-quality features from a continuous data stream. We showed that this architecture entails higher computing costs but surpasses the accuracy of the state of the art. In addition, the RNN architecture can be scaled down to counter the blurring of the context vector of the LSTM cells with very long input sequences, as the remaining ResNet network offers more qualitative features.
Deep Bayesian method. In 2020, we investigated whether methods of the RNN family can extract certain movement properties from recorded movement data in order to replace the measurement-, process-, and transition-noise distributions of a Kalman filter (KF). We showed that highly optimized LSTM cells can reconstruct trajectories more robustly (lower error variance) and more precisely (positional accuracy) than an equally highly optimized KF. The deep coupling of LSTMs in the KF, so-called Deep Bayes, provided the most robust and precise positions and trajectories. This study also showed that of the methods trained on realistic synthetic data, the Deep Bayesian method needed the least real data to adapt to a new, unknown domain, e.g., unknown motion shapes and velocity distributions.
III. Finally, the feasibility is to be demonstrated.
In 2018, a large-scale social science study opened the world's largest virtual dinosaur museum and showed that (1) a pre-selected (application-optimized) model of human movement maps and predicts human motion robustly and accurately (i.e., without a significant impact on simulator sickness). We used this as a basis for comparison tests with other models that are human-centered and generalize to different environments.
In 2019, we developed two new live demonstrators that are based on the research results achieved in I and II. (1) A model railway that crosses a landscape with a tunnel at variable speeds. The tunnel represents realistic and typical environmental characteristics that lead to nonlinear multipath propagation of a radio transmitter to be located and ultimately to an incorrectly determined position. This demonstrator shows that the RNN methods researched as part of the project can localize highly precisely and robustly, both on complex channel impulse responses and on dimensionally reduced response times, and also deliver better results than conventional Kalman filters. (2) We used the second demonstrator to visualize the movement of a person's upper extremities. We recorded human movement using inexpensive inertial sensors attached to both arm joints, classified it using machine and deep learning, and derived motion parameters. A graphical user interface visualizes the movement and the derived parameters in near real time. The planned generalizability, e.g., of human-centered models, and the applicability of RNN-based methods in different environments have been demonstrated using (1) and (2).
In 2019, we applied the proposed methods in the following applications:
Application: Radio signal. We classified the channel information of a radio system hierarchically. We translated the localization problem of a line-of-sight (LoS) and non-line-of-sight (NLoS) classifier into a binary problem. Hence, we can now precisely localize a position to within a meter, based on individual channel information from a single antenna, if the environment provides heterogeneous channel propagation. Furthermore, we simulated LoS and NLoS channel information and used it to interpolate between different channels. This enables the providers of radio systems to respond a priori, in simulation, to changing or new environments in the channel information. By selectively retraining the models with the simulated knowledge, we obtained more robust models.
Application: Camera and radio signal. We have shown how the RNN methods relate to information from other sensor families, e.g., video images: when radio and camera systems are combined in training a model, the two sensor information streams merge smoothly, even in the event of occlusion of the camera. This yields a more robust and precise localization of multiple people.
Application: Camera signal. We used an RNN method to examine the temporal relationships between events in images. In contrast to previous work, which uses heterogeneous sensor information, this network only uses image information. However, the model uses the image information in such a way that it interprets the images differently: as spatial information, i.e., a single image, and as temporal information, i.e., several images in the input. This splitting implies that individual images can be used as two fictitious virtual sensor information streams to recognize results spatially (features) and to better predict them temporally (temporal relationships). Another work uses camera images to localize the camera itself. For this purpose, we built a new processing pipeline that breaks up the video signal over time, learns absolute and relative information in different neural networks, and merges their outputs into an optimal pose in a fusion network.
Application: EEG signal. In a cooperation project, we applied the researched methods to other sensor data.
We recorded beta and gamma waves of the human brain in different emotional states. An RNN trained on the raw EEG data correctly predicted the emotions of a test person in 90% of all cases.
Application: Simulator Sickness. We have shown how visualization in VR affects human perception and movement anomalies, i.e., simulator sickness, and how the neural networks researched here can be used to predict these effects.
In 2020, we developed a new live demonstrator based on the research results achieved in II.
Application: Gait Reconstruction in VR. In 2020, we used the existing CNN-RNN model to predict human movement, namely gait cycles and gait phases, from the data of a head-mounted inertial sensor, in order to visualize a virtual avatar in VR in real time. We showed that the DL model has significantly lower latencies than the state of the art, since it recognizes gait phases earlier and predicts future ones more precisely. However, this comes at the expense of the required computing effort and thus the required hardware.
The project was successfully completed in 2021. In 2021, as part of a successfully completed dissertation, the essential findings from the course of the project were linked, conclusions were drawn, and numerous research questions were addressed and answered.
As part of the research project, more than 15 qualification theses and 6 patent families were completed, and more than 20 scientific publications were published. The core contribution of the project is knowledge about the applicability and the pitfalls of recurrent neural networks (RNNs), with their different cell types and architectures, in different application areas. Conclusion: the ability of the RNN family to deal with dynamics in data streams (e.g., failures, delays, and different sequence lengths in time series data) makes them indispensable in a large number of application areas today.
The project continues in seminars at the FAU and in extracurricular research activities at Fraunhofer IIS within the framework of the ADA Lovelace Center.
In 2022, we investigated time series augmentation. For this purpose, we evaluated various generative methods, namely Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN), for their ability to generate time series of different application domains, e.g., features of radio signals (such as signal strength), channel impulse responses, characteristics of GNSS spectra, and multidimensional signals from inertial sensors. We proposed a novel architecture called ARCGAN, which combines the known advantages of state-of-the-art methods and can therefore generate significantly more similar (i.e., more effective) time series than the state of the art.
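To give an impression of the generative approach, the following is a minimal PyTorch sketch of GAN-based time-series generation. It is not the ARCGAN architecture; the layer sizes and the noisy sine waves that stand in for measured signals are assumptions made purely for illustration:

    import torch
    import torch.nn as nn

    SEQ_LEN, LATENT = 64, 16

    # Generator maps a latent vector to a synthetic time series;
    # discriminator scores a series as real (1) or generated (0).
    G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, SEQ_LEN))
    D = nn.Sequential(nn.Linear(SEQ_LEN, 128), nn.ReLU(), nn.Linear(128, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def real_batch(n=32):
        # Illustrative stand-in for measured data, e.g., signal strength.
        t = torch.linspace(0, 6.28, SEQ_LEN)
        phase = torch.rand(n, 1) * 6.28
        return torch.sin(t + phase) + 0.05 * torch.randn(n, SEQ_LEN)

    for step in range(2000):
        real = real_batch()
        fake = G(torch.randn(real.size(0), LATENT))
        # Discriminator update: learn to tell real from generated series.
        loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
                 bce(D(fake.detach()), torch.zeros(real.size(0), 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator update: learn to fool the discriminator.
        loss_g = bce(D(fake), torch.ones(real.size(0), 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

After training, G(torch.randn(k, LATENT)) yields k synthetic series that can augment a scarce training set.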
In 2023, we investigated generative methods based on attention mechanisms, transformer architectures, and GPT with respect to their predictive performance for time series. To this end, we evaluated methods such as Legendre Memory Units (LMU), novel transformer architectures, and TimeGPT to better forecast localization information. We showed that, with appropriate input prompts and calibration, preconfigured GPT models can be adapted to new areas of application, making training significantly more efficient and also saving energy.
In 2024, we further examine GPT-like models with respect to their uncertainty, explainability, and adaptability. In addition, we analyze the feasibility of these generative methods for various fields of application, e.g., forecasting and anomaly detection, anomaly characterization, anomaly localization, and anomaly mitigation.
-
Computer Science basics as an essential building block of modern STEM field curricula
(Third Party Funds Single)
Term: 01.10.2016 - 30.09.2019
Funding source: Bayerisches Staatsministerium für Bildung und Kultus, Wissenschaft und Kunst (from 10/2013)
URL: https://www2.cs.fau.de/research/GIFzuMINTS/
The increasing digitalization of all areas of science and life renders competencies in the foundations of computer science essential for all tech students and beyond. For the success of their academic studies, the introductory computer science courses in particular often prove to be problematic hurdles that may lead to dropout.
For this reason, this project expands the support that students get while they are still at school, while transitioning from school to university, and during the introductory phase. To address the study orientation phase when future STEM students are still at school, we (a) use our regional and national contacts to provide support for seminars and we (b) offer advanced training for teachers as they act as multipliers when future students choose their degrees. To address the transition from school to university, we focus on the fact that freshmen show up with different previous knowledge. We offer revision courses to bring the students onto the same page, i.e., to make their knowledge more homogeneous. In the introductory phase, special intensification exercises and tutoring that take heterogeneity into account strive to lower the dropout rates.
In 2018, one focus was to evaluate the effectiveness of our measures: the increased range of exercise groups, the more extensive support from the tutors, the correlation between exercise attendance and dropout rate, the effects of participation in the revision courses on the performance in the exercises and in the exam, etc.
In order to attract and qualify teachers as multipliers, we expanded the range of advanced training courses for teachers: we demonstrated innovative approaches, examples and content for teaching so that the participants can pass on to their students what they have learned themselves.
To quantitatively and qualitatively improve the W seminar papers written in computer science at school we compiled a 24-page brochure and sent it to schools in surrounding counties. This brochure supports teachers in the design and implementation of W seminars in IT by providing subject suggestions, tips, and a checklist for students.
The GIFzuMINTS project ended in 2019 with a special highlight: On May 20, 2019, the Bavarian Minister of State for Science and Art, Bernd Sibler, and the deputy general manager of vbw bayme vbm, Dr. Christof Prechtl, visited us in a status meeting. Minister Bernd Sibler was impressed: "The concept of the FAU is perfectly tailored to the requirements of a degree in computer science. The young students are supported from the very beginning immediately after finishing school. That is exactly our concern, which we pursue with MINTerAKTIV: We want every student to receive the support she/he needs to successfully complete his/her academic studies."
By the end of the project, the measures developed and implemented were thoroughly evaluated and established as permanent offers. The revision course on computer science was transformed into a continuous virtual offer for self-study and updated to the latest state of the art. The course for talented students that prepares them to participate in international programming competitions was expanded and set up as a formal module of the curriculum. In order to ensure that the measures sustain, we applied for subsequent funding, which has already been approved as CS4MINTS.
-
Adaptive Algorithms for RF-based Locating Systems
(Third Party Funds Single)
The goal of this project is the development of adaptive algorithms for radio-based realtime locating systems. In the scope of this project we cover three essential topics:
Automatic configuration of event detectors. In previous research projects we built the basics for the analysis of noisy sensor data streams. However, event detectors still need to be parameterized carefully to yield satisfying results. This work package explores the possibilities of an automatic configuration of the event detectors based on existing sensor and event data streams.
In 2016, we investigated concepts to extract optimal configurations from the available sensor data streams. For soccer, sport scientists manually annotated matches and scenes (e.g., player A kicks the ball with his/her left foot at time t). These manually annotated scenes can later be used to optimize the hierarchy of event detectors; a sketch of such a configuration search follows this paragraph.
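The idea of such an automatic configuration can be illustrated as follows (the kick detector, its parameters, and all values are invented for this sketch): a grid search scores each candidate configuration against the annotated scenes and keeps the one with the best F1 score:

    from itertools import product

    def f1(detected, annotated):
        tp = len(detected & annotated)
        if tp == 0:
            return 0.0
        precision, recall = tp / len(detected), tp / len(annotated)
        return 2 * precision * recall / (precision + recall)

    def autoconfigure(detector, stream, annotated, grid):
        # Try every parameter combination; the annotated scenes serve
        # as ground truth for scoring each candidate configuration.
        best, best_score = None, -1.0
        for params in (dict(zip(grid, vals)) for vals in product(*grid.values())):
            score = f1(detector(stream, **params), annotated)
            if score > best_score:
                best, best_score = params, score
        return best, best_score

    # Hypothetical kick detector: an event fires where the signal jumps.
    def kick_detector(stream, threshold, window):
        return {i for i in range(window, len(stream))
                if stream[i] - stream[i - window] > threshold}

    stream = [0, 0, 1, 5, 5, 0, 0, 6, 6, 0]
    annotated = {3, 7}                      # manually annotated kick events
    grid = {"threshold": [2, 3, 4], "window": [1, 2]}
    print(autoconfigure(kick_detector, stream, annotated, grid))
    # e.g. ({'threshold': 2, 'window': 1}, 1.0)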
Evaluation of machine learning techniques for locating applications. In previous research projects we already developed machine learning algorithms for radio-based locating systems (e.g., evolutionary algorithms that estimate antenna positions and orientations). This work package investigates further approaches that use machine learning to enhance the performance of realtime locating systems.
In 2016, we evaluated concepts to replace parts of the position estimation algorithms with machine learning algorithms. Up to now, a signal processing chain (analog/digital conversion, time-of-arrival estimation, Kalman filtering, motion estimation) computes a position from the raw sensor data. This often causes high installation and configuration costs when setting up locating systems in an application environment.
Evaluation of vision-based techniques to support radio-based realtime locating. Radio-based locating systems have strengths when objects are occluded, as microwaves may pass through the occluding objects. However, metallic surfaces in the environment pose challenges as they reflect RF signals. Hence, the RF signal that a transmitter emits arrives at the antennas over multiple paths, and it is often difficult to extract the directly received parts of the signal at the antenna and thus to properly estimate the distance between the antenna and the emitter. In this work package we investigate vision-based locating techniques that may help RF-based systems in calculating positions.
In 2016, we developed two systems: CNNLok is used by objects carrying a camera (self-localization), i.e., inside-out tracking, whereas InfraLok uses cameras installed in the environment to track objects with infrared light. CNNLok uses a convolutional neural network (CNN) that is trained on several camera images taken in the environment (at known places). At runtime, the CNN receives a camera image and calculates the position of the camera. InfraLok detects infrared LEDs using a multi-camera system and calculates the position of objects in space.
-
Software Watermarking
(Own Funds)
Term: since 01.01.2016
URL: https://www2.cs.fau.de/research/SoftWater/
Software watermarking means hiding selected features in code in order to identify it or prove its authenticity. This is useful for fighting software piracy, but also for checking the correct distribution of open-source software (for instance, projects under the GNU license). Previously proposed methods assume that the watermark can be introduced at the time of software development and require the understanding and input of the author for the embedding process. The goal of our research is the development of a watermarking framework that automates this process by introducing the watermark during the compilation phase, into newly developed or even into legacy code. As a first approach, we studied a method that is based on symbolic execution and function synthesis.
In 2018, two bachelor theses analyzed two methods of symbolic execution and function synthesis in order to determine the most appropriate one for our approach. In 2019, we investigated the idea of using concolic execution in the context of the LLVM compiler infrastructure in order to hide a watermark in an unused register: using a modified register allocation, one register can be reserved for storing the watermark. In 2020, we extended the framework (now called LLWM) for automatically embedding software watermarks into source code (based on the LLVM compiler infrastructure) with further dynamic methods. The newly introduced methods rely on replacing/hiding jump targets and on call graph modifications. In 2021, we added further adapted, already published dynamic methods, as well as a newly developed method, to LLWM. The added methods are based, among other things, on the conversion of conditional constructs into semantically equivalent loops and on the integration of hash functions that leave the functionality of the program unchanged but increase its resilience. Our newly developed method IR-Mark not only specifically selects the functions in which the code generator avoids using a certain register, but also adds some dynamic computation of fake values that makes use of this register to blur what is going on. There is a publication on both LLWM and IR-Mark. In 2022, we added another adapted procedure to the LLWM framework that uses exception handling to hide the watermark. In 2023, we adapted further methods to expand the LLWM framework, including embedding techniques based on principles of number theory and on aliasing. A toy illustration of the embedding principle follows.
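As a toy illustration only (far simpler than LLWM or IR-Mark, and with an invented variant scheme): watermark bits can be hidden by choosing between semantically equivalent code variants, and recovered by a verifier that knows the scheme:

    # Two semantically equivalent templates per watermark bit:
    # bit 0 -> "x = x + x", bit 1 -> "x = x * 2". A verifier that knows
    # the scheme can read the bits back from the generated code.
    VARIANTS = {"0": "{v} = {v} + {v}", "1": "{v} = {v} * 2"}

    def embed(watermark_bits, variables):
        # Emit one doubling statement per bit, cycling through variables.
        return [VARIANTS[b].format(v=variables[i % len(variables)])
                for i, b in enumerate(watermark_bits)]

    def extract(lines):
        return "".join("1" if "*" in line else "0" for line in lines)

    code = embed("1011", ["a", "b"])
    print(code)        # ['a = a * 2', 'b = b + b', 'a = a * 2', 'b = b * 2']
    assert extract(code) == "1011"
-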
Automatic Detection of Race-Conditions
(Own Funds)
Term: 01.01.2016 - 30.09.2021
URL: http://www2.informatik.uni-erlangen.de/research/AuDeRace/
Large software projects with hundreds of developers are difficult to review and often contain many bugs. Automated tests are a well-established technique for testing sequential and deterministic software. They test the whole project (system test) or each module by itself (unit test). However, recent software contains more and more parallelism. This introduces several new bug patterns, like deadlocks and concurrent memory accesses, that are harder or even impossible to detect reliably with conventional test methods. Whether the faulty behavior actually shows at runtime depends on the concrete scheduling of the threads, which is nondeterministic and varies between individual executions depending on the underlying system. Due to this unpredictable behavior, such bugs do not necessarily manifest in an arbitrary test run or may never arise in the testing environment at all. As a result, conventional tests are not well suited for modern, concurrent software.
With the project AuDeRace, we develop methods to efficiently and reliably detect concurrency bugs while keeping the additional effort for developers as low as possible. In an initial approach, we defined a testing framework that allows the specification of a scheduling plan to regain deterministic execution. However, a major problem remains: the developer has to identify and implement well-suited test cases that cover the potential fault in the program and execute them in a special deterministic way in order to trigger the failure. Especially in the context of concurrency, it is difficult to visualize the behavior of a program and identify the problematic parts. To overcome this, the critical parts shall automatically be narrowed down before dedicated test cases are even written. Existing approaches and tools for this purpose generate too many false positives, or their analysis is very time-consuming, making their application to real-world code prohibitive. The goal of this project is to generate fewer false positives and to increase the analysis speed by combining existing static and dynamic analyses. This allows for efficient use not only on small example codes but also on large and complex software projects.
In 2016, existing approaches were studied regarding their usability as a starting point for our project. The most promising method uses model checking and predefined assertions to construct thread schedules that trigger the faulty behavior. However, the approach is currently infeasible for larger projects because only very small code bases could be analyzed in reasonable time. Therefore, we focused on automatically detecting and removing statements that are unrelated to the parallelism, respectively to the potentially faulty code parts, in order to decrease the execution time of the preliminary static analysis.
In 2017, the work on automatically reducing programs to speed up further analysis was continued. Furthermore, we evaluated whether the concept of mutation testing can be applied to parallel software as well. The results indicate that this extension is indeed possible and can rate tests qualitatively. However, to complete the analysis for larger programs in reasonable time, a few heuristics need to be applied during the process.
In 2018, the focus moved to a deterministic execution of test cases. We developed a concept to reproduce results during execution: in addition to the test case, a schedule specifies the dynamic behavior of the threads. Instrumenting the code at previously marked positions and at other relevant bytecode instructions allows a separate control thread to enforce the schedule, as sketched below. When the source code is modified, the marked positions in the code need to be updated as well, to keep them consistent with the test cases. A merging technique similar to the ones used in version control systems shall automatically update the positions.
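The following minimal Python sketch illustrates the enforcement idea (the project itself instruments Java bytecode; all names here are invented): each thread blocks at its instrumented positions until the schedule grants it the next turn, so the interleaving becomes reproducible:

    import threading

    class Scheduler:
        # Enforces a predefined order of instrumented steps across threads.
        def __init__(self, schedule):
            self.schedule = schedule      # e.g. ["t1", "t2", "t1"]
            self.pos = 0
            self.cond = threading.Condition()

        def run_step(self, name, action):
            # Block until it is `name`'s turn, run the action while holding
            # the turn, then pass the turn on to the next schedule entry.
            with self.cond:
                while self.schedule[self.pos] != name:
                    self.cond.wait()
                action()
                self.pos += 1
                self.cond.notify_all()

    sched = Scheduler(["t1", "t2", "t1"])
    log = []

    def thread1():
        sched.run_step("t1", lambda: log.append("t1: read shared"))
        sched.run_step("t1", lambda: log.append("t1: write shared"))

    def thread2():
        sched.run_step("t2", lambda: log.append("t2: write shared"))

    threads = [threading.Thread(target=thread1), threading.Thread(target=thread2)]
    for t in threads: t.start()
    for t in threads: t.join()
    # Always the same interleaving, run after run:
    print(log)   # ['t1: read shared', 't2: write shared', 't1: write shared']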
Up to 2019, this project was a contribution of the Chair of Computer Science 2 (Programming Systems) to the IZ ESI (Embedded Systems Initiative, http://www.esi.fau.de/). In this context, several approaches for improving the quality of concurrent software were analyzed. The take-away result was that different approaches are applicable and required, but that they often suffer from long analysis times.
Beyond the ESI project, we improved the usefulness of mutation testing by developing a tool for equivalence detection and test case generation. A paper on this work was submitted and accepted.
In 2020, we studied and evaluated approaches to detect external race conditions. Whereas in classic race conditions several threads of the analyzed software fail to work together properly, in external race conditions the software interacts with independent, unknown components. Examples are other programs, the operating system, or even malicious code written by attackers that interferes with the analyzed software.
In 2021, we developed a system to statically detect race conditions involving external resources. A common pattern is to check properties of a file and later access the file again, assuming the previously determined properties still hold. However, if the file is modified in the meantime, numerous problems can occur (time-of-check to time-of-use). Besides unexpected results, attackers can even modify the file to enforce malicious behavior and compromise the system. With our approach, such vulnerable code can be detected in the software. The vulnerable pattern is illustrated below.
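For illustration, the vulnerable pattern and a safer variant look roughly as follows (a source-level sketch; our analysis detects such patterns statically):

    import os

    path = "/tmp/report.txt"

    # Vulnerable pattern: between the check and the use, another process
    # may replace `path`, e.g., by a symlink to a sensitive file.
    if os.access(path, os.R_OK):          # time of check
        with open(path) as f:             # time of use: the file may differ now
            data = f.read()

    # Safer variant: open first, then check the already-opened file;
    # the opened descriptor cannot be swapped out from under us.
    # (os.O_NOFOLLOW additionally refuses to follow a symlink; POSIX only.)
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        info = os.fstat(fd)               # check properties of the opened file
        if info.st_uid == os.getuid():
            data = os.read(fd, info.st_size)
    finally:
        os.close(fd)
-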
Parallel code analysis on a GPU
(Own Funds)
Term: 01.07.2013 - 30.09.2020
URL: https://www2.cs.fau.de/research/ParCAn/
In compiler construction there are analyses that propagate information along the edges of a graph and modify it until a fixed point is reached and the information no longer changes. In this project we built the ParCAn framework to accelerate such analyses by exploiting the massive parallelism of graphics cards. The underlying fixed-point scheme is sketched below.
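The analyses in question follow a common worklist scheme, shown here sequentially for clarity; ParCAn's contribution is to evaluate such node updates for many graph nodes at once on the GPU:

    def fixpoint(graph, init, transfer, join):
        # graph: node -> list of successor nodes
        # init: node -> initial dataflow fact
        # transfer: computes a node's outgoing fact from its incoming fact
        # join: combines facts reaching a node along several edges
        facts = {n: init(n) for n in graph}
        worklist = list(graph)
        while worklist:
            n = worklist.pop()
            out = transfer(n, facts[n])
            for succ in graph[n]:
                merged = join(facts[succ], out)
                if merged != facts[succ]:
                    facts[succ] = merged    # information changed:
                    worklist.append(succ)   # re-examine the successor
        return facts

    # Example: which nodes reach each node of a small CFG.
    cfg = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    result = fixpoint(cfg,
                      init=lambda n: {n} if n == "entry" else set(),
                      transfer=lambda n, f: f | {n},
                      join=lambda old, new: old | new)
    print(result["d"])   # {'entry', 'a', 'b', 'c'}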
In 2016, our research focus was on synchronization mechanisms for GPUs. Known synchronization methods for CPUs (e.g., spin locks) cannot be used on GPUs without further adjustment, since their special architectural properties easily lead to dead- and livelocks. Synchronization is required (even for predominantly data-parallel graph implementations) if data dependences occur dynamically. We have therefore developed a novel synchronization mechanism that solves two non-trivial problems related to GPUs: First, we prevent dead- and livelocks. Second, we retain as much parallelism as possible by allowing data-parallel threads to work concurrently on disjoint areas of a data structure. For example, think of threads that modify disjoint locations of a graph without affecting its structural integrity. In our approach, a programmer can provide rules that describe the conditions under which a parallel access is allowed. At runtime, we check these rules and determine how many threads can run in parallel.
We are currently extending the above synchronization mechanism with a scheduler that redistributes conflicting data accesses so that the SIMD execution on a GPU causes less serialization than without the reordering. Hence, the degree of parallelism grows. The underlying idea exploits the fact that GPUs organize threads in hierarchical units. If the above synchronization mechanism detects a conflicting access in one of these units, it checks on the next smaller unit whether the conflict can also be found there. If this is not the case, the (fewer) threads of that smaller unit can run in parallel, which is much better than serializing all threads of the enclosing unit. In this situation it is the scheduler's task to redistribute the detected collisions across the units so that as many threads as possible can run in parallel. As the scheduling is performed at run time, it needs to be efficient, must itself run in parallel, and should potentially make use of the dynamic thread creation capabilities of modern GPUs.
Graphs are fundamental data structures for representing relations between data (e.g., in social networks or web link analysis) and can have millions or even billions of vertices and edges. GPUs can process graphs very efficiently with thousands of threads in parallel. Graph analyses typically use the Bulk Synchronous Parallel (BSP) model, which divides an analysis into three strictly separated phases: computation, communication, and synchronization. The latter two require communication with the host system (CPU), which slows down execution. Our GPU-based compiler works according to the BSP model, too: internally, the code is represented as a (control flow) graph; this graph is transferred to the GPU and analyzed there, and every code modification triggers this cycle. The graph thus has to be generated and transferred to the GPU very quickly.
Publications in the field of graph analysis focus on optimizing the computation time. The end-to-end execution time (including communication and synchronization) is usually ignored, although it has a strong impact on the total run time. Our compiler considers every phase of the BSP model. In 2017, we published a paper that significantly reduces the time for synchronization.
In addition, we focus on speeding up the communication phase of the BSP model. Communication here means the transfer of the graph in both directions (GPU ↔ host). The graph data structure used not only has a strong impact on the transfer time but also influences the computation phase. Since there is no publication in the literature that systematically investigates the impact of the data structure on the end-to-end run time of a GPU graph analysis, we implemented a number of benchmarks that exercise different access patterns on graphs (e.g., successor/predecessor access, random node access) and eight different graph data structures to represent graphs on the GPU. For the measurements we used a number of structurally different graphs. The results are likely to help developers pick the right graph data structure for their GPU problem.
In 2018, we completed our comparative study on the efficiency of graph data structures on GPUs. To show the effectiveness of our framework, we integrated it into the LLVM compiler framework: we picked four LLVM analyses and parallelized them with ParCAn. Ample measurements show that our framework can accelerate LLVM's compilation process by up to 40%. A publication was accepted at the 28th International Conference on Compiler Construction and received the Best Paper Award.
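For background, one common way to represent a static graph on a GPU, and one kind of candidate in such a comparison, is the compressed sparse row (CSR) format; a small sketch of its layout:

    def to_csr(edges, num_nodes):
        # CSR stores all successor lists in one contiguous array (`adj`)
        # plus an offset array: the successors of node n are
        # adj[offsets[n]:offsets[n+1]]. Contiguous storage enables the
        # coalesced memory accesses GPUs need.
        offsets = [0] * (num_nodes + 1)
        for src, _ in edges:
            offsets[src + 1] += 1
        for i in range(num_nodes):
            offsets[i + 1] += offsets[i]
        adj = [0] * len(edges)
        fill = offsets[:-1].copy()
        for src, dst in sorted(edges):
            adj[fill[src]] = dst
            fill[src] += 1
        return offsets, adj

    edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
    offsets, adj = to_csr(edges, 3)
    print(offsets, adj)                     # [0, 2, 3, 4] [1, 2, 2, 0]
    n = 0
    print(adj[offsets[n]:offsets[n + 1]])   # successors of node 0: [1, 2]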
In 2019, ParCAn was adjusted to the new execution model of NVIDIA’s latest GPU architectures. With the introduction of the Volta architecture, threads can now achieve progress independently of the others. Since Volta every thread has its own program counter and call stack. Previously, a group of threads (called a warp) shared both a common program counter as well as a call stack. The threads either executed the same instruction or were idle (lock-step execution). Applications that are not adjusted to this execution model will compute wrong results. As threads now execute independently of each other, race conditions can occur within warps. Older lock-step fashioned execution models inserted synchronization points to prevent this implicitly. We inspected ParCAn’s source code for code fragments susceptible to causing race conditions on new architectures. These fragments were adjusted to now execute properly on the latest NVIDIA architectures.
In 2020, we successfully completed this research project. We demonstrated that parallelizing the particularly cost-intensive data flow analyses can speed up the compilation process by up to 31%. Thus, our research leads the way towards parallelized compilers that meet the requirements of today's software projects. The importance of this research topic was underlined by a Best Paper Award at the renowned Compiler Construction conference, see references.
The use of the GPU as the target architecture raised further research questions that were also published.
Some analyses store their information in a global data structure that all threads can modify simultaneously. The high number of concurrent threads on a GPU in particular demands efficient synchronization. Thus, as part of the research project, we implemented an efficient framework for establishing mutual exclusion, see the LNCS paper in the references. Previous approaches inevitably resulted in deadlocks when the GPU was fully utilized. Moreover, by using a variant of the inspection-execution paradigm, we further improved the efficiency of the framework.
Another research topic was the efficiency of graph structures on GPUs. At its core, ParCAn implements a graph traversal algorithm: the program to be translated is converted into a graph, the control flow graph (CFG), on which the analyses are executed. Due to the large number of parallel accesses, the CFG is a performance-critical data structure for ParCAn. For this reason, we conducted an extensive study comparing the performance of graph data structures. We used the results to determine the best possible data structure to represent the CFG. From the study, we derived general criteria that allow assumptions about the performance of a data structure under certain conditions. Even outside the context of ParCAn, developers can use these criteria, represented as a decision tree, to choose the most appropriate data structure for their static graph algorithms. The results of the study were presented at the GPGPU workshop, see references.
-
Design for Diagnosability
(Third Party Funds Single)
Term: 15.05.2013 - 30.09.2018
Funding source: Bayerisches Staatsministerium für Wirtschaft und Medien, Energie und Technologie (StMWIVT) (from 10/2013)
URL: http://www2.informatik.uni-erlangen.de/research/DfD/
Many software systems behave conspicuously during the test phase or even in normal operation. Diagnosing and treating such runtime anomalies is often time-consuming and complex, sometimes even impossible. Possible consequences of using such a software system are long response times, inexplicable behavior, and crashes. The longer the causes remain unresolved, the higher the accumulated economic damage.
"Design for Diagnosability" is a tool chain targeted towards increasing the diagnosability of software systems. By using the tool chain that consists of modeling languages, components, and tools, runtime anomalies can easily be identified and solved, ideally already while developing the software system. Our cooperation partner QAware GmbH provides a tool called Software EKG that enables developers to explore runtime metrics of software systems by visualizing them as time series.
The research project Design for Diagnosability enhances the ecosystem of the existing Software EKG. The Software-Blackbox measures technical and functional runtime values of a software system in a minimally intrusive way. We store the measured values as time series in a newly developed time series database called Chronix. Chronix stores time series extremely efficiently, optimizing both disk space and response times. Chronix is an open source project (www.chronix.io) and is free for everyone to use.
The newly developed Time-Series-API analyzes these values, e.g., by means of an outlier detection mechanism. The Time-Series-API provides multiple additional building blocks to implement further strategies for identifying runtime anomalies.
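A simple stand-in for such a building block (an illustration only; the actual Time-Series-API is more extensive) is a z-score-based outlier detector:

    from statistics import mean, stdev

    def outliers(series, threshold=3.0):
        # Flag points that deviate from the mean by more than
        # `threshold` standard deviations.
        m, s = mean(series), stdev(series)
        if s == 0:
            return []
        return [(i, x) for i, x in enumerate(series) if abs(x - m) / s > threshold]

    # Hypothetical response-time series with one runtime anomaly:
    response_times_ms = [120, 125, 118, 122, 119, 121, 950, 123, 117]
    print(outliers(response_times_ms, threshold=2.0))   # [(6, 950)]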
The mentioned tools, in combination with the existing Software EKG, will become the so-called Dynamic Analysis Workbench. This tool will enable developers to diagnose, explain, and fix occurring runtime anomalies both quickly and reliably, and will provide diagnosis plans to localize and identify the root causes of runtime anomalies. The full tool chain aims at increasing the quality of software systems, particularly with respect to the metrics mean time to repair and mean time between defects.
Before the project was successfully completed in July 2016, we made the following contributions:
- We have linked Chronix and a framework for distributed data processing so that our anomaly analyses now scale to huge sets of time series data.
- We extended Chronix with additional components. Among them are, for example, a more efficient storage model, some adapters for more time series databases, additional server-side analysis functions, and some new time series types.
- We have published our benchmark for time series databases.
- We have investigated and implemented an approach to link application-level calls, e.g., a login of a user, down to the resulting calls on the OS level.
Although funding expired in 2016, we made further contributions in 2017:
- We presented Chronix at the FAST conference in Santa Clara, CA in February 2017.
- We have equipped Chronix with interfaces to attach time series databases that are used in the industry.
- We have developed an approach that determines the ideal cluster configuration (w.r.t. processing time and costs) for a given analysis (specific function and set of time series).
- We have expanded Spark, a framework for distributed processing of large-scale data, so that it now can make use of GPUs in distributed time series analyses. We presented the results at the Apache Big Data Conference in Miami, Florida, in May 2017.
We continued to make further contributions to the research project in 2018:
- We have published a paper at PROFES 2018 that describes techniques and insights on how runtime data in a large software project can be offered to all project participants at the development stage to improve their collaboration.
- We have maintained the Chronix Open Source project and stabilized it further (updating versions, fixing bugs, etc.).
-
Real-time critical systems (Echtzeitkritische Systeme)
(Third Party Funds Single)
Term: 01.01.2013 - 31.12.2013
Funding source: Industry
-
Techniques and tools for iterative development and optimization of software for embedded multicore systems
(Third Party Funds Single)
Term: 15.10.2012 - 30.11.2014
Funding source: Bayerisches Staatsministerium für Wirtschaft, Infrastruktur, Verkehr und Technologie (StMWIVT) (until 09/2013)
Multicore processors are of rising importance in embedded systems as these processors offer high performance while maintaining low power consumption. Developing parallel software for these platforms poses new challenges for many industrial sectors because established tools and software libraries are not multicore-enabled. The efficient development, optimization, and testing of multicore software are still open research questions, especially for reliable real-time embedded systems.
In the multi-partner project "WEMUCS" [http://www.multicore-tools.de/], new methods and tools for the efficient iterative development, optimization, and testing of multicore software have been created over the past two years. Innovative tools and technologies for modeling, simulation, visualization, tracing, and testing have been developed and integrated into a single tool chain. Using case studies from different industries (automotive, telecommunications, industrial automation), these tools were evaluated and improved.
Although several well-known methods for test case generation and best-practice coverage measures exist for classical single-core applications, no such methods have established themselves for multi-core software yet. Unfortunately, it is the interaction of concurrent threads that can cause faults that cannot be discovered by testing the individual threads in isolation. As part of the WEMUCS project (more precisely: work package AP3 [http://www.multicore-tools.de/de/test.html]) and based on an industrial-size case study, we developed a generic technique (called a "testing pipeline" below) that systematically creates test cases to find and analyze the impact of concurrent side effects.
To evaluate the new testing pipeline (including the automated parallelization of sequential code) on real world examples, our project partner Siemens created a complete model of a large luggage conveyor belt, including the code to control the belt. Such a luggage conveyor can be found at every airport. The case study's model is used to automatically derive luggage conveyor belt systems of different sizes, i.e., built from an arbitrary number of feeder or outlet belts. The hardware of the conveyor belt is emulated on the SIMIT simulation tool from Siemens and the control software (written in the programming language AWL) runs on a software-based PLC.
The first step of the testing pipeline converts the AWL code into a more comprehensible and human-readable programming language called HLL. We have completed this converter during this reporting period. In step two, our tool then transforms previously sequential parts of the AWL code into HLL units that are executed in parallel. When applied to a luggage conveyor belt system built from eight feeders, eight outlets, and an interconnecting circular belt of straight and curved segments, our tool automatically transcoded 11,704 lines of sequential AWL code into 34 KB of parallel HLL code.
In step three another tool developed by the chair then analyzes the HLL code and automatically generates a testing model. This model represents the interprocedural control flow of the concurrent subroutines and also holds all the thread switches that might be relevant for testing. The testing model consists of a set of hierarchically organized UML activities (currently encoded as an XMI document that can be imported into Enterprise Architect by Sparx Systems). When applied to the case study outlined above, our tool automatically generates 103 UML activity diagrams (with 1,302 nodes and 2,581 edges).
Step four is optional: the tester can manually adapt the testing model as needed (e.g., by changing priorities or inserting additional verification points). The completed model can then be loaded into MBTsuite, a model-based testing tool developed by our project partner sepp.med GmbH. This tool is highly configurable to generate test cases that cover as many parts of the testing model as possible. We ran MBTsuite on a standard PC and applied it to the testing model of our case study; within six minutes, MBTsuite generated a highly optimized test set consisting of only 10 test cases that nevertheless cover 99% of the nodes and 78% of the edges.
In cooperation with our project partner sepp.med GmbH we built two export modules for the above-mentioned MBTsuite. One module outputs the generated test cases as a human-readable spreadsheet, the other module outputs an executable test set in the Java language. The exported spreadsheet contains one individual sheet per test case, with one column per thread. The rows visualize the thread interleaving that gets tested. The Java file holds a Java class with a test method per test case. Every method holds a sequence of test steps that discretely control thread interleavings. This way, each test case execution leads to a unique and reproducible execution of the parallel System Under Test (SUT) written in HLL. Each run of a test instructs our HLL emulator to load and initialize the SUT and each subsequent test step instructs the HLL emulator to execute a certain set of instructions from a certain thread. During this fully controlled execution, each test case emits a detailed protocol of its execution for the final visualization step.
The resulting log file is visualized by a sepp.med plug-in that creates a visualization layer in Enterprise Architect's UML editor: colored nodes and edges tell the user which control flow paths a test has covered. If a test case "fails" (i.e., a race condition or a logic error is found in the tested program), its graphical trace ends at the failing statement. The tester can then follow the control flow back in time in order to understand the underlying reason for the failure.
We have implemented a prototype of the full testing pipeline and demonstrated its applicability to an industrial size case study. This tool is a major contribution to testing concurrent code for embedded systems. It is a contribution of the Programming Systems Group to the "ESI initiative" [http://www.esi-anwendungszentrum.de].
-
Incremental Code Analysis
(Own Funds)
Term: 01.04.2012 - 30.06.2017
URL: https://www2.cs.fau.de/research/InCA/
To ensure that errors in a program design are caught early in the development process, it is useful to detect mistakes already during the editing of the code. For that, the analysis employed has to be fast enough for interactive use. One way to achieve this is incremental analysis, which combines the analysis results of parts of the program into an analysis of the whole program. As an advantage, it is then possible to re-use large parts of the analysis results when a small change to the program occurs, namely those for the unaffected parts of the program and for libraries. Thus, the work required for the analysis can be drastically reduced, which makes the analysis suitable for interactive use.
Our analysis is based on determining, for (parts of) functions, which effects their execution can have on the state of the program at runtime. To achieve this, the runtime state of a program is modeled as a graph that describes the variables and objects in the program's memory and their points-to relationships. A function is executed symbolically to determine the changes made to the graph or, equivalently, to the runtime state it describes. To determine the effects of executing pieces of code in sequence, of function calls, loops, etc., the change descriptions of smaller parts of the program can be combined in various ways, resulting in descriptions of the behavior of larger parts of the program. The analysis works in a bottom-up fashion, analyzing a called method before its callers (with recursion being analyzable as well). This composition idea is sketched below.
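A much-simplified model of the composition idea (the flat map below is invented for illustration; the actual analysis works on points-to graphs):

    # An "effect" maps variables to the abstract objects they may point
    # to after executing a code part. Composing two effects applies the
    # second one after the first; cached summaries of unchanged parts
    # are simply reused.

    def compose(first, second):
        combined = dict(first)
        combined.update(second)          # later assignments win
        return combined

    # Cached summaries for two consecutive parts of a function:
    effect_part1 = {"a": {"obj1"}, "b": {"obj2"}}
    effect_part2 = {"b": {"obj3"}}       # part 2 reassigns b

    whole = compose(effect_part1, effect_part2)
    print(whole)    # {'a': {'obj1'}, 'b': {'obj3'}}

    # If only part 2 is edited, only its summary is recomputed and the
    # cheap composition is redone; part 1's summary is reused as-is.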
In 2015, we focused on improving the algorithms and data structures used by the analysis. We significantly improved its runtime and memory requirements for analyzing a given program. Additionally, the analyzed program may now contain more, and more expressive, language features.
In 2016, we again worked on these algorithms and data structures. We improved both the scalability of the analysis towards large code bases with more than one million statements and the incremental analysis, in which we re-use the analysis results of unmodified program parts, drastically speeding up the analysis for typical software projects (i.e., projects with a large code base and small, incremental changes).
In 2017, we continued to improve these algorithms and data structures. In addition to further developing the analysis' scalability towards large code bases and its incrementality (re-using the analysis results of unmodified program parts), we focused on an easy-to-grasp documentation of the analysis, both to make it understandable and to lay the theoretical basis for verifying its correctness.
-
Inter-Thread Testing
(Own Funds)
Term: 01.01.2012 - 31.12.2013
In order to achieve higher computing performance, microprocessor manufacturers no longer try to achieve faster clock speeds - on the contrary: clock rates have even decreased, while the number of independent processing units (cores) per processor is continually increasing. Due to this evolution, developers must learn to think outside the box: the only way to make their applications faster (in terms of efficiency) is to modularize their programs such that independent sections of code execute concurrently. Unfortunately, present-day systems have reached a level of functional complexity such that even software for sequential execution is significantly error-prone - and parallelization for multiple cores adds yet another dimension to the non-functional complexity. Although research in the field of software engineering has produced several quality assurance measures, there are still very few effective methods for testing concurrent applications, as the broad emergence of multi-core systems is relatively young.
This project aims to fill that gap by providing an automated test system. First of all, a hierarchy of testing criteria is needed that provides different coverage metrics tightly tailored to the concept of concurrency. While, for example, branch coverage for sequential programs requires the execution of each program branch during the test (i.e., making the condition of an if-statement both true and false - even if there is no explicit else branch), a thorough test completion criterion for concurrent applications must demand the systematic execution of all relevant thread interleavings (i.e., all possibly occurring orderings of statements where two threads may modify a shared memory area; a sketch of their enumeration follows this paragraph). A testing criterion defines the properties of the 'final' test set only, but does not provide any support for identifying individual test cases. In contrast to testing sequentially executed code, test scenarios for parallel applications must also comprise control information for deterministically steering the execution of the TUT (Threads Under Test).
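For two threads, the relevant interleavings are the order-preserving merges of the two statement sequences; the following sketch enumerates them exhaustively (a practical tool additionally prunes irrelevant interleavings):

    def interleavings(t1, t2):
        # All merges of t1 and t2 that preserve each thread's own order.
        if not t1:
            yield list(t2); return
        if not t2:
            yield list(t1); return
        for rest in interleavings(t1[1:], t2):
            yield [t1[0]] + rest
        for rest in interleavings(t1, t2[1:]):
            yield [t2[0]] + rest

    t1 = ["T1: read x", "T1: write x"]
    t2 = ["T2: write x"]
    for schedule in interleavings(t1, t2):
        print(schedule)
    # 3 interleavings: T2's write can fall before, between, or after
    # T1's two steps. The combinatorial growth of this set is exactly
    # why coverage criteria and pruning are needed.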
In 2012, a framework for Java was developed that automatically generates such control structures for the TUT. The tester must only provide the bytecode of the application; further details, such as source code or restrictions of the test scenario selection, are optional. The approach uses aspect-oriented programming techniques to enclose memory access statements (reads or writes of variables, responsible for typical race conditions) with automatically generated advice. After weaving the aspects into the SUT (System Under Test), variable accesses are intercepted at runtime, the execution of the corresponding thread is halted until the desired test scenario is reached, and the conflicting threads are reactivated in the order imposed by the given test scenario. To demonstrate the functionality, some naive sequence control strategies were implemented, e.g., alternately granting access to shared variables from different threads.
In 2013, the prototypical implementation of the InThreaT framework was re-engineered as an Eclipse plug-in. This way, our approach can also be applied in a multi-project environment, and the required functionality integrates seamlessly into the development environment (IDE) familiar to programmers and testers. In addition, the configuration effort required from the tester was reduced, as selecting and persisting the interesting points of interleaving also became intuitively usable parts of the IDE. In order to automatically explore all relevant interleavings, we need an infrastructure that enriches functional test cases with the control information required to systematically (re-)execute individual test cases. In 2013, such an approach for JUnit was evaluated and implemented prototypically; it allows individual test cases or whole test classes to be marked with adequate annotations.
-
Embedded Realtime Language Development Framework
(Own Funds)
Term: 01.01.2012 - 30.11.2014
ErLaDeF is our test-bed for new programming language and compiler techniques. Our main focus is on building infrastructure for easier programming of (hard + soft) real-time embedded parallel systems.
We focus on hard real-time embedded systems as they are about to go massively parallel in the near future.
Real-time and embedded systems also have hard constraints on resource usage. For example, a task should complete in a fixed amount of time, have guaranteed upper limits on the amount of memory used, etc. We are developing different ways to manage this concurrency using a combination of strategies: simpler language features, automatic parallelization, libraries of parallel programming patterns, deep compiler analysis, model checking, and making compiler analysis fast enough for interactive use.
Runtime Parallelization of Programs
Our automatic parallelization efforts are currently focused on dynamic parallelization: while a program is running, it is analyzed to find loops whose parallelization can improve performance. Our current idea is to run long-running loops three times. The first two runs analyze the memory accesses of the loop and can both run in parallel. The first run stores, in a shared data structure, for every memory address the loop iteration in which a write access happens. We do not need any synchronization for this data structure; we only need the guarantee that when two concurrent writes happen, one of the values is actually written to memory. In the second pass, we check for every memory access whether it has a dependency on one of the stored write accesses. As a write access is part of every data dependency, we can find all types of data dependencies this way. If we do not find any, the loop is actually run in parallel; if we find dependencies, the loop is executed sequentially. We can execute the analyses in parallel to a modified sequential execution of the first loop iterations. A condensed sketch of the two passes follows.
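A condensed sketch of the two analysis passes and the final decision (written sequentially for clarity; in our approach the analysis passes themselves run in parallel):

    from concurrent.futures import ThreadPoolExecutor

    def run_sequentially(body, n):
        for i in range(n):
            body(i)

    def run_in_parallel(body, n):
        with ThreadPoolExecutor() as pool:
            list(pool.map(body, range(n)))

    def analyze_and_run(body, reads, writes, n):
        # Pass 1: record, for every address, the iteration that writes it.
        last_write = {}
        for i in range(n):
            for addr in writes(i):
                last_write[addr] = i
        # Pass 2: any access to an address written in a *different*
        # iteration is a cross-iteration dependence.
        for i in range(n):
            for addr in reads(i) | writes(i):
                if last_write.get(addr, i) != i:
                    return run_sequentially(body, n)
        return run_in_parallel(body, n)

    # Example: a[i] = a[i] * 2 touches only iteration-local addresses,
    # so the loop is judged parallelizable.
    a = list(range(8))
    analyze_and_run(lambda i: a.__setitem__(i, a[i] * 2),
                    reads=lambda i: {i}, writes=lambda i: {i}, n=8)
    print(a)   # [0, 2, 4, 6, 8, 10, 12, 14]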
In 2013, we explored alternatives to polymorphism and inheritance that may be easier to analyze. We also examined alternative thread synchronization methods, for example transactional memory, implicit synchronization, and remote procedure calls.
In 2014 we enhanced the analysis so that a loop can start running while the remainder of the loop is analyzed to see if it can be run in parallel. To allow the sequential loop to execute while the tail of the loop is analyzed we needed to instrument the sequential loop slightly. The result is that the loop runs only slightly slower if the loop cannot be parallelized, but if the loop is found to be parallelizable, speedup is near to linear.
Finally, we also created a new language that uses the above library for run-time parallelization. Any loops that the programmer marked as candidates for run-time parallelization are analyzed for constructs that the library cannot yet handle. If the loop is clean, code is generated that uses the library's macros.
Design Patterns for Parallel Programming
A library of parallel programming patterns allows a programmer to select well-known parallelization and inter-core communication strategies from a well-debugged library. We are performing research into which (communication) patterns actually exist and when they can be applied. We have collected over 30 different patterns for parallel communication. In 2013, we investigated mechanisms to automatically determine the fitting implementation for a given software and hardware environment. We also added a set of distributed channels through which cores can send data from one local memory to another. The distributed channels allow the library to be used to program modern network-on-chip (NoC) processors.
Script-based language for embedded systems (Pylon)
Pylon is a language that is close to scripting languages, but statically typed. A large part of the complexity that a programmer would normally have to deal with when creating an application is moved into the compiler (e.g., type inference). The programmer does not have to think about types at all: by analyzing the expressions of a program, types are inferred (duck typing). The language is also implicitly parallel; the programmer does not need expert knowledge to parallelize an application, as the compiler automatically decides what to run in parallel. Finally, the language is kept simple so that it remains easy to learn for novice programmers. For example, we kept the number of keywords small.
Any language construct that makes analyzing the program hard for a compiler has been omitted (pointer arithmetic, inheritance, etc.). All removed features have been replaced by simpler variants that can be analyzed easily. The current focus of this project is supporting the programmer in designing the code. The previous programming language research results have been absorbed into the Pylon project. For example, the prior research results on alternatives to polymorphism and inheritance have been added to Pylon. This allows us to report errors at compile time where other languages can only find them at run time. A toy illustration of expression-based type inference follows.
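As a toy illustration only (the expression representation is invented; this is not Pylon's actual inference algorithm), the type of a variable can be derived bottom-up from the literals and operators of the expressions assigned to it:

    def infer(expr, env):
        # Infer the type of a small expression tree bottom-up.
        kind = expr[0]
        if kind == "lit":
            return type(expr[1]).__name__          # e.g. 'int', 'str'
        if kind == "var":
            return env[expr[1]]
        if kind == "add":
            left, right = infer(expr[1], env), infer(expr[2], env)
            if left != right:
                raise TypeError(f"cannot add {left} and {right}")
            return left

    env = {}
    # x = 1 + 2   -> x : int
    env["x"] = infer(("add", ("lit", 1), ("lit", 2)), env)
    # y = x + 3   -> y : int (uses the inferred type of x)
    env["y"] = infer(("add", ("var", "x"), ("lit", 3)), env)
    print(env)    # {'x': 'int', 'y': 'int'}
    # z = x + "a" -> rejected at "compile time":
    # infer(("add", ("var", "x"), ("lit", "a")), env) raises TypeError

Interactive Program Analysis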
To ensure that program design errors are caught early in the development cycle, it is necessary to find bugs while editing. This requires that program analysis works at interactive speed. We are following two approaches. The first approach centers around algorithmic changes to program analysis problems. Making analysis problems lazy means examining only those parts of a program that are pertinent to the question the compiler is currently asking. For example, if the compiler needs to know which functions access a certain object, it should not examine unrelated functions, classes, or packages. Making program analysis incremental means that a small change to the program should require only a small amount of (re-)analysis work. To achieve this, a program is split recursively into parts, and for each of the parts it is calculated which effect it would have during an execution of the program. For each part, a symbolic representation of its effects is saved. These representations can be used, first, to find the errors that occur when two of the parts interact (concurrently or non-concurrently). Second, we can deduce the effects that a bigger part of the program has during its execution by combining the effects of the smaller parts it consists of. This enables incremental analysis, because changes in one place do not cause the whole program to be reanalyzed: the symbolic representations of the effects of unchanged parts of the program stay unchanged as well.
In 2013, our key focuses were twofold: first, we developed data structures that can both precisely and efficiently describe the effects of a part of a program; second, we developed efficient and precise algorithms to create and use these data structures.
In 2014, we expanded and modified this analysis framework in order to support big code bases, analyze them, and keep the analysis results for later use. This enables us to precisely analyze programs that use libraries, by first analyzing the library and then using the library's analysis results to get precise analysis results for the program.
Our second approach to bringing compiler analysis to interactive speed is to make the analysis itself parallel. In 2013, we continued to develop data-parallel formulations of basic compiler analyses and started to implement a generic data-parallel predicate propagation framework. Its data-parallel form is portably executable on many different multi-core architectures.
Object-oriented languages offer the possibility to dynamically allocate objects; the memory required for this is allocated at run time. However, in contrast to desktop systems, embedded systems typically have very little memory. If the 'new' operator is used often in an embedded system (and embedded systems are now starting to be programmed in higher-level languages such as Java and C++ that include 'new'), memory can be exhausted at run time, causing the embedded system to crash.
In 2014, we created an analysis that finds this problem at compile time and reports it to the developer.
To detect memory exhaustion at compile time, the analysis determines the lifetime of references to objects. If there are no more references to an object, the object can be removed from memory. Normally, such reference counting schemes are executed at run time; we, however, perform reference counting at compile time, in an interactive fashion. As a result, memory management errors can be found at compile time instead of at run time. Additionally, static reference counting increases a program's performance, as the reference counts do not have to be manipulated at run time. If it is statically determined that an object can be removed, the developer needs to insert a 'delete' statement. With explicit memory management, we are then able to statically determine a program's worst-case memory requirements. The whole analysis outlined above is integrated into Pylon and the predicate propagation framework reported on previously. Note that the analysis is language-independent and can be applied to other languages as well (Java, C++, etc.); however, in that case we cannot guarantee that the reference counts are computed correctly, as we rely on Pylon's analyzability for this. A toy version of the compile-time reference counting is sketched below.
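The announced toy version (the statement representation is invented for the sketch): the analysis walks the statements, tracks reference counts symbolically, and reports the statement after which an object becomes unreachable, i.e., where a 'delete' could be inserted:

    def analyze(statements):
        # Each statement is ("assign", var, obj) or ("drop", var).
        points_to = {}        # variable -> object it references
        refcount = {}         # object -> number of referencing variables
        for idx, stmt in enumerate(statements):
            if stmt[0] == "assign":
                _, var, obj = stmt
                old = points_to.get(var)
                if old is not None:
                    refcount[old] -= 1
                    if refcount[old] == 0:
                        print(f"after stmt {idx}: delete {old}")
                points_to[var] = obj
                refcount[obj] = refcount.get(obj, 0) + 1
            elif stmt[0] == "drop":   # variable goes out of scope
                obj = points_to.pop(stmt[1])
                refcount[obj] -= 1
                if refcount[obj] == 0:
                    print(f"after stmt {idx}: delete {obj}")

    analyze([("assign", "a", "objA"),   # a = new objA
             ("assign", "b", "objA"),   # b = a
             ("assign", "a", "objB"),   # a = new objB; objA still live via b
             ("drop", "b"),             # b leaves scope -> delete objA here
             ("drop", "a")])            # -> delete objB here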
The ErLaDeF project is a contribution of the Chair of Computer Science 2 (Programming Systems) to the IZ ESI (Embedded Systems Initiative, http://www.esi-anwendungszentrum.de/).
-
Compiler-supported parallelization for multi-core architectures
(Own Funds)
Term: 01.01.2011 - 31.12.2016
Several issues significantly retard the development of faster and more efficient computer architectures. Traditional technologies no longer yield more hardware speed: basic problems are the diverging latencies of memory accesses and CPU speeds as well as the heat and energy waste caused by increasing clock rates. Homogeneous and heterogeneous multi-core and many-core architectures were presented as a possible answer and offer enormous performance to the programmer. The multi-level cache hierarchy and decreased clock rates help avoid most of the above problems. Potentially, performance can increase even further through the specialization of some hardware components. Current target architectures are GPUs with hundreds of arithmetic units and the Intel XeonPhi processor that provides 60 or more cores, including hyper-threading, on a single board.
While data-parallel problems can be accelerated relatively easily on the new hardware architectures, the implementation of task-parallel problems is our main research focus. The difficulty is often the irregularity of the resulting task tree and thus the differing task run times. From the point of view of a programming systems research group, there are, among others, the following open questions: Which core executes which work packet in which order? When should a work packet be donated from one compute node to another? Which data belongs to a work packet, and are multiple cores/compute nodes allowed to access the data simultaneously? How do we have to merge data from multiple compute nodes? How can a compiler, together with a runtime system, create tasks and distribute work packets?
In 2011, we implemented and extended the Cilk programming model for the heterogeneous CellBE architecture (one PowerPC core (PPU) with eight SPU "coprocessors"), which offers an enormous computing potential on a single chip. To move a work packet within the heterogeneous architecture, we extended the Cilk programming model with an extra keyword. A source-to-source transformation then creates code for both the PPU and the SPU cores. Furthermore, we moved the data along with the work packets into the SPUs' local stores and later used a garbage collection technique to free memory on remote SPUs.
In 2012, we focused on graphics cards (GPUs) as a second target architecture. GPUs offer a lot more performance than ordinary CPUs, but achieving peak performance may be difficult. For data-parallel problems, good performance can be achieved relatively easily with CUDA (NVIDIA) or OpenCL (AMD). However, it is much more difficult to port task-parallel problems to the GPU with reasonable performance, which is one of the goals on our roadmap. Thus, we design, implement, and compare various load balancing algorithms. In 2012, we designed a first approach with hierarchical queues based on the principle of work donation; the sketch below contrasts this principle with work stealing.
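To contrast the two principles (a much-simplified, sequential simulation with invented task and worker types): under work stealing an idle worker grabs tasks from a busy worker's queue, whereas under work donation the busy worker actively pushes surplus tasks to the idle one:

    from collections import deque

    class Worker:
        def __init__(self, name):
            self.name, self.queue = name, deque()

    def donate(src, dst, k=1):
        # Work donation: the busy worker pushes surplus tasks to an idle
        # one (the idle worker does not grab them itself, as it would
        # under work stealing). The donor keeps at least one task.
        for _ in range(min(k, len(src.queue) - 1)):
            dst.queue.append(src.queue.popleft())

    w0, w1 = Worker("w0"), Worker("w1")
    w0.queue.extend("task%d" % i for i in range(4))

    # w0 notices that w1 is idle and donates half of its backlog.
    donate(w0, w1, k=2)
    print(w0.name, list(w0.queue))   # w0 ['task2', 'task3']
    print(w1.name, list(w1.queue))   # w1 ['task0', 'task1']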
In 2013, while further developing the load balancing algorithms for the GPU, we also targeted the Intel XeonPhi processor. With its many-core architecture and large register sets (and thus the ability to issue vector instructions on multiple data), the XeonPhi processor poses a new challenge for load balancing algorithms. In practice, we extended and adapted Cilk for the XeonPhi such that we can automatically merge functions during the source-to-source transformation. This increases the Intel compiler's chances to parallelize automatically. We implemented several analyses that not only increase the number of candidate functions for merging but also avoid (or at least handle) divergence in the merged functions.
In 2014, we extended our existing implementation for XeonPhi processors so that we can distribute the work over multiple XeonPhi processors. In contrast to work stealing, which distributes work over the many cores of a single XeonPhi, we use work donation to distribute work to other XeonPhi processors. A new source code annotation makes it possible to mark the data ranges a work packet needs. These data ranges are then distributed along with the work packet and merged at synchronization points, which was the main challenge of the implementation. Furthermore, we started to extend the Clang compiler of the LLVM framework with support for Cilk in order to automatically generate CUDA code for execution on GPUs. Along with the generated CUDA code, we designed a lightweight but general runtime system that manages the execution and execution order of the work packets. We plan to implement analyses that avoid execution divergence as much as possible.
In 2015, we evaluated and compared multiple load balancing algorithms for executing Cilk programs on the GPU. To this end, we implemented queuing algorithms for parallel access and improved the automated generation of the necessary CUDA code. The correct placement of Cilk keywords for synchronization remains a challenge for the programmer. We therefore generate, from plain recursive C code, multiple "plausible" code variants that include synchronization statements. These variants are executed speculatively, and the result of the fastest correct variant is used for further computations (see the sketch below). Furthermore, the size of the recursion's base case is crucial for optimal performance. Consequently, we started to optimize the base-case size using compile-time and run-time analyses and will replace recursive calls by function inlining and vectorization.
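The speculative scheme can be illustrated with plain Java; this is a hedged sketch, not the actual CUDA runtime. ExecutorService.invokeAny returns the result of the first variant that completes without throwing, so variants that detect an incorrect synchronization can abort by throwing an exception.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Runs all code variants concurrently; invokeAny() yields the result of
 *  the first variant that completes successfully. Variants that notice an
 *  inconsistent (wrongly synchronized) state signal this by throwing, so
 *  only a fast *and* correct result is used. */
class SpeculativeVariants {
    static <T> T fastestCorrect(List<Callable<T>> variants)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(variants.size());
        try {
            return pool.invokeAny(variants);
        } finally {
            pool.shutdownNow();   // cancel the slower variants
        }
    }
}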
-
Automatic Code Parallelization at Runtime
(Own Funds)
Term: 01.01.2011 - 30.04.2016
Our automatic parallelization efforts are currently focused on dynamic parallelization. While a program is running, it is analyzed to find loops where parallelization can help performance. Our current idea is to run long-running loops three times. The first two runs analyze the memory accesses of the loop and can both run in parallel. The first run stores, in a shared data structure, for every memory address the loop iteration in which a write access happens. We do not need any synchronization for this data structure; we only need the guarantee that one of two concurrent writes actually reaches memory. In the second pass we check for every memory access whether it depends on one of the stored write accesses. Since a write access is part of every data dependency, this finds all types of data dependencies. If we find none, the loop is actually run in parallel; if we find dependencies, the loop is executed sequentially. The analyses can run in parallel to a modified sequential execution of the first loop iterations (a minimal sketch follows below).
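A minimal sketch of this two-pass test, assuming memory addresses are modelled as array indices (the real system instruments actual memory accesses):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

/** Pass 1 records, per written address, *some* iteration that writes it;
 *  if two iterations race, either value suffices to mark the address as
 *  written, which is why no further synchronization is needed. Pass 2
 *  flags a dependence whenever an access touches an address written by a
 *  different iteration; since every data dependence involves a write,
 *  this finds all dependence types. */
class TwoPassDependenceTest {
    static boolean isParallelizable(int n, int[] writeAddr, int[] readAddr) {
        Map<Integer, Integer> writerOf = new ConcurrentHashMap<>();
        IntStream.range(0, n).parallel()
                 .forEach(i -> writerOf.put(writeAddr[i], i));       // pass 1

        return IntStream.range(0, n).parallel().noneMatch(i -> {     // pass 2
            Integer w = writerOf.get(readAddr[i]);
            if (w != null && w != i) return true;                    // flow/anti dependence
            Integer w2 = writerOf.get(writeAddr[i]);
            return w2 != null && w2 != i;                            // output dependence
        });
    }
}

If the test reports no dependence, the loop is re-executed in parallel; otherwise it runs sequentially.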
In 2014 we enhanced the analysis so that a loop can start running while the remainder of the loop is analyzed to see whether it can be run in parallel. To let the sequential loop execute while its tail is being analyzed, we had to instrument the sequential loop slightly. As a result, the loop runs only slightly slower if it cannot be parallelized; if it is found to be parallelizable, the speedup is nearly linear.
Finally, we also created a new language that uses the above library for run-time parallelization. Loops that the programmer has marked as candidates for run-time parallelization are analyzed for constructs that the library cannot yet handle. If a loop is clean, code is generated that uses the library's macros.
The project is a contribution of the Chair of Computer Science 2 (Programming Systems) to the IZ ESI (Embedded Systems Initiative, http://www.esi-anwendungszentrum.de/)
-
Efficient Software Architectures for Distributed Event Processing Systems
(Third Party Funds Single)
Term: 15.11.2010 - 31.12.2015
Funding source: Fraunhofer-Gesellschaft
Localization systems (also known as Realtime Location Systems, or RTLS) are becoming more and more popular in industry sectors such as logistics, automation, and many more. These systems provide valuable information about the whereabouts of objects at runtime, so that processes can be traced, analyzed, and optimized. Besides the research activities at the core of localization systems (like resilient and interference-free location technologies or methods for highly accurate positioning), algorithms and techniques emerge that identify meaningful information for further processing steps. Our research focuses on automatic configuration methods for RTLSs as well as on the generation of dynamic motion models and techniques for event processing on position streams at runtime.
In 2011, we investigated whether events can be predicted by analyzing and learning the event streams of the localization system at runtime. As a result, we are able to deduce models that represent the information buried in the event stream and use them to predict future events.
In 2012, we developed several methods and techniques that process and detect events with low latency. Composite (complex) events are detected by hierarchically aggregating sub-events, which are themselves detected by (several) event detectors that process partial information in the event stream. This greatly reduces the complexity of the detection components and keeps them maintainable. The detectors can also use parallel or distributed cluster architectures more efficiently, so that important events are detected within a few milliseconds (see the sketch below).
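A minimal sketch of the hierarchical aggregation idea; the Event type and the detector interface are hypothetical placeholders, not the project's API:

import java.util.List;

/** An event detector inspects a window of the (position) event stream. */
interface EventDetector {
    boolean detect(List<Event> window);
}

/** Hypothetical stream element. */
record Event(String type, long timestampMillis) {}

/** A composite event fires only if all of its sub-detectors fire. The
 *  sub-detectors are independent of each other, so they can be evaluated
 *  in parallel (here via parallelStream), which keeps latencies low. */
class CompositeDetector implements EventDetector {
    private final List<EventDetector> subDetectors;
    CompositeDetector(List<EventDetector> subDetectors) {
        this.subDetectors = subDetectors;
    }
    @Override public boolean detect(List<Event> window) {
        return subDetectors.parallelStream().allMatch(d -> d.detect(window));
    }
}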
In 2013 we further reduced the detection latency in distributed event-based systems in two ways: first, a new migration technique modifies and optimizes the allocation of software components in a networked environment at runtime to minimize networking overhead and detection latencies; second, a speculative event processing technique uses conservative buffering to exploit available system resources. We also created and published a representative data set (consisting of realtime position data and event streams) together with a corresponding task description.
In 2014 we investigated fundamental approaches to handle uncertainties (both with respect to the definition of event detectors and with respect to the events themselves). We implemented a promising prototype of an event-based system that is no longer deterministic but instead evaluates several possible system states in parallel to achieve a much more robust and correct detection. The domain expert can parameterize the event detectors by attaching probabilities or probability functions to the generated events.
In 2015 we improved, optimized, and published our approach. Furthermore, we started to investigate approaches for learning optimal parameter sets for the event detectors, so that manual adjustment and tuning of parameters (like thresholds) becomes unnecessary.
The project is a contribution of the Programming Systems Group to the IZ ESI (http://www.esi.uni-erlangen.de/)
-
Analysis of Code Repositories
(Own Funds)
Term: 01.01.2010 - 12.04.2024
URL: https://www2.cs.fau.de/research/AnaCoRe/
Software developers often modify their projects in a similar or repetitive way. The reasons for these changes include the adoption of a changed interface to a library, the correction of mistakes in functionally similar components, or the parallelization of sequential parts of a program. If developers have to perform the necessary changes on their own, the modifications can easily introduce errors, for example due to a missed change location. Therefore, an automatic technique is desirable that identifies similar changes and uses this knowledge to support developers with further modifications.
Extraction of Code-Changes
In 2017, we developed a new code recommendation tool called ARES (Accurate REcommendation System). It creates more accurate recommendations than previous tools because its algorithms take code movements into account during pattern and recommendation creation. The foundation of ARES is the comparison of two versions of the same program: it extracts the changes between the two versions and creates patterns from the changed methods. ARES uses these patterns to automatically suggest similar changes for the source code of other programs.
The extraction of code changes is based on trees. In 2016 we developed and published a new tree-based algorithm (MTDIFF) that improves the accuracy of the change extraction.
Symbolic Execution of Code-Fragments
In 2014 we developed a new symbolic code execution engine called SYFEX that determines the behavioral similarity of two code fragments. In this way we aim to improve the quality of the recommendations. Depending on the number and the generality of the patterns in the database, SIFE may generate unfitting recommendations without the new engine. To present only the fitting recommendations to developers, we compare a summary of the semantics/behavior of the recommendation with a summary of the semantics/behavior of the database pattern. If the two differ too severely, our tool drops the recommendation from the results. The distinctive features of SYFEX are its applicability to isolated code fragments and its automatic configuration, which does not require any human interaction.
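SYFEX itself compares symbolic summaries of the fragments' behavior. As a crude dynamic stand-in that only illustrates the notion of behavioral similarity, two fragments (modelled here as int-to-int functions) can be compared on sampled inputs; unlike a symbolic summary, sampling can miss differences on unexercised paths:

import java.util.Random;
import java.util.function.IntUnaryOperator;

/** Crude dynamic approximation of behavioral equality: sample inputs and
 *  compare outputs. This is *not* SYFEX's symbolic approach, which covers
 *  all execution paths rather than samples. */
class BehaviorSampling {
    static boolean agreeOnSamples(IntUnaryOperator a, IntUnaryOperator b,
                                  int samples, long seed) {
        Random rnd = new Random(seed);
        for (int i = 0; i < samples; i++) {
            int x = rnd.nextInt();
            if (a.applyAsInt(x) != b.applyAsInt(x)) return false;
        }
        return true;
    }
}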
In 2015 SYFEX was refined and applied to code fragments from the repositories of different software projects. In 2016 we investigated to what extent SYFEX can be used to gauge the semantic similarity of submissions to a programming contest. In 2017 and 2018 we optimized the implementation of SYFEX. We also began collecting a data set of semantically similar methods from open-source repositories, which we published in 2019.
Techniques for symbolic execution use algorithms that check the satisfiability of logical/mathematical expressions in order to detect valid execution paths in a program. These algorithms usually account for a large part of the total runtime of a symbolic execution. To accelerate this satisfiability check, we experimented with a technique that replaces complicated expressions with simpler equivalent expressions obtained via program synthesis. In 2020, we extended this program synthesis with a novel technique that quickly detects whether a fixed set of operations can be used at all to construct an expression equivalent to the complicated one. We published this approach in 2021 and were able to show that it reduces the runtime of common program synthesizers by 33% on average. We subsequently extended this technique to other classes of program synthesis problems. In 2022, we performed a comprehensive evaluation of these extensions, which showed that they similarly improve the runtime of program synthesizers on a larger class of program synthesis problems. We completed the work on unrealizability detectors for bit-vector program synthesis in 2023 and described it in detail in a dissertation.
Detection of Semantically Similar Code Fragments
SYFEX computes the semantic similarity of two code fragments and thus allows identifying pairs or groups of semantically similar code fragments (semantic clones). However, the high runtime of SYFEX (and of similar tools) limits its applicability to larger software projects. In 2016, we therefore started to develop a technique that accelerates the detection of semantically similar code fragments. It is based on so-called base comparators, which compare two code fragments using a single, cheap-to-compute criterion (e.g., the number of control structures used or the structure of the control flow graph). These base comparators can be combined into a hierarchy of comparators. To compute the semantic similarity of two code fragments as accurately as possible, we use genetic programming to search for hierarchies that approximate the similarity values reported by SYFEX for a number of pairs of code fragments (a minimal sketch of such a combination follows below). A prototype implementation confirmed that the method can detect pairs of semantically similar code fragments.
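A minimal sketch of such a combination, assuming a hypothetical CodeFragment abstraction; the genetic programming mentioned above searches over the structure and weights of such hierarchies, which are fixed here for brevity (the weights are assumed to sum to 1):

import java.util.List;
import java.util.function.ToDoubleBiFunction;

/** Hypothetical fragment abstraction exposing two cheap criteria. */
record CodeFragment(int numControlStructures, int numStatements) {}

/** Combines cheap base comparators into one similarity score in [0,1]. */
class ComparatorHierarchy {
    record Part(ToDoubleBiFunction<CodeFragment, CodeFragment> cmp, double weight) {}
    private final List<Part> parts;
    ComparatorHierarchy(List<Part> parts) { this.parts = parts; }

    double similarity(CodeFragment a, CodeFragment b) {
        return parts.stream()
                    .mapToDouble(p -> p.weight() * p.cmp().applyAsDouble(a, b))
                    .sum();
    }

    /** Example base comparator: closeness of control-structure counts. */
    static double controlStructureSim(CodeFragment a, CodeFragment b) {
        int max = Math.max(1, Math.max(a.numControlStructures(),
                                       b.numControlStructures()));
        return 1.0 - Math.abs(a.numControlStructures()
                              - b.numControlStructures()) / (double) max;
    }
}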
We further improved the implementation of this approach in 2017 and 2018. Additionally, we focused on evaluating the approach with pairs of methods from software repositories and from programming exercises. Moreover, we created a data set of semantically similar methods from open-source software repositories that we published in 2019.
Techniques for symbolic execution rely on algorithms that check the satisfiability of logical/mathematical expressions; these are used to decide whether an execution path in a program is feasible and often consume a large share of the total computation time. To speed up this satisfiability check, in 2019 and 2020 we experimented with a technique that replaces complicated expressions with simpler expressions of the same meaning, obtained via program synthesis. In 2020 we augmented the program synthesis with a novel approach that detects beforehand whether a given set of operations can form an expression with the same meaning as a more complicated expression.
Semantic Code Search
The functionality that has to be implemented during the development of a software product is often already available as part of program libraries. It is often advisable to reuse such an implementation instead of rewriting it, for example to reduce the effort for developing and testing the code.
To reuse an implementation that fits the purpose, developers have to find it first. To this end developers already use code search engines on a regular basis. State-of-the-art search engines work on a syntactic level, i.e., the user specifies some keywords or names of variables and methods that should be searched for. However, current approaches do not consider the semantics of the code that the user seeks. As a consequence, relevant but syntactically different implementations often remain undetected ("false negatives") or the results include syntactically similar but semantically irrelevant implementations ("false positives"). The search for code fragments on a semantic level is the subject of current research.
In 2017 we began the development of a new method for semantic code search. The user specifies the desired functionality by means of input/output examples. A function synthesis algorithm from the literature is then used to create a method that implements the specified functionality as accurately as possible. Using our technique for detecting similar code fragments, this synthesized method is compared with the methods of program libraries in order to find semantically similar implementations, which are then presented to the user as search results. A first evaluation of our prototypical implementation shows the feasibility and practicality of the approach.
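As a simplified, hypothetical illustration of searching by input/output examples (the actual approach first synthesizes a method from the examples and then compares it semantically against library methods), a library method can be treated as a hit if it reproduces all user-given pairs:

import java.util.List;
import java.util.Map;
import java.util.function.Function;

/** A library method is a search hit if it reproduces all user-given
 *  input/output pairs (methods are modelled here as Integer functions). */
class IoExampleSearch {
    static List<Function<Integer, Integer>> search(
            List<Function<Integer, Integer>> library,
            Map<Integer, Integer> examples) {
        return library.stream()
                .filter(m -> examples.entrySet().stream()
                        .allMatch(e -> m.apply(e.getKey()).equals(e.getValue())))
                .toList();
    }
}

For the examples {2 -> 4, 3 -> 9}, a squaring method would be returned, while an increment method would be filtered out.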
Clustering of Similar Code-Changes
To create generalized change patterns, the set of extracted code changes has to be split into subsets of changes that are similar to each other. In 2015 this detection of similar code changes was improved and resulted in a new tool called C3. We developed and evaluated different metrics for a pairwise similarity comparison of the extracted code changes. Subsequently, we evaluated different clustering algorithms known from the literature and implemented new heuristics that automatically choose the respective parameters, replacing the previous naive approach to detecting similar code changes. This clearly improved the results: C3's new techniques detect more groups of similar changes that SIFE can process to generate recommendations (a naive sketch of the grouping step follows below).
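The following is a naive, hypothetical stand-in for the grouping step; C3 evaluates real clustering algorithms and chooses parameters such as this similarity threshold automatically:

import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleBiFunction;

/** Greedily assigns each code change to the first group whose
 *  representative is similar enough; otherwise a new group is opened.
 *  sim() is any pairwise metric, e.g., one based on textual diffs. */
class GreedyChangeClustering {
    static <T> List<List<T>> cluster(List<T> changes,
                                     ToDoubleBiFunction<T, T> sim,
                                     double threshold) {
        List<List<T>> groups = new ArrayList<>();
        for (T change : changes) {
            List<T> target = null;
            for (List<T> g : groups) {
                if (sim.applyAsDouble(g.get(0), change) >= threshold) {
                    target = g;
                    break;
                }
            }
            if (target == null) {
                target = new ArrayList<>();
                groups.add(target);
            }
            target.add(change);
        }
        return groups;
    }
}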
The aim of the second improvement is to automatically refine the resulting groups of similar code changes. For this purpose we evaluated several machine learning algorithms for outlier detection in order to remove code changes that were spuriously assigned to a group.
In 2016 we implemented a new similarity metric for comparing two code changes that essentially considers the textual difference between the changes (as generated, for example, by the Unix tool 'diff'). We published both a paper on C3 and the data set (consisting of groups of similar changes) that we generated for the evaluation of our tool under an open-source license (https://github.com/FAU-Inf2/cthree). This data set can be used as a reference or as input data for future research. In addition, we prototypically extended C3 with techniques for incremental similarity computation and clustering. This allows us to reuse results from previous runs and to perform only the strictly necessary work whenever new code changes are added to a software archive.
-
Software Project Control Center
(Third Party Funds Single)
Term: 01.11.2009 - 31.12.2015
Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
Prototypical implementation of a new tool for quality assurance during software development
Modern software systems are growing increasingly complex with respect to functional, technical, and organizational aspects. Both the number of requirements per system and the degree of their interconnection constantly increase. Furthermore, technical parameters, e.g., for distribution and reliability, are getting more complex, and software is developed by teams that are not only spread around the globe but also work under increasing time pressure. As a result, the functional, technical, and organizational control of software development projects is getting more difficult.
The "Software Project Control Center" is a tool that helps the project leader, the software architect, the requirements engineer, or the head of development. Its purpose is to make all aspects of the development process transparent and thus to allow for better project control. To achieve transparence, the tool distills and gathers properties from all artifacts and correlations between them. It presents/visualizes this information in a way suitable for the individual needs of the users.
The Software Project Control Center unifies access to the relations between artifacts (traceability) and to their properties (metrics) within software development projects, which can significantly increase their efficiency. The artifacts, their relations, and related metrics are gathered and integrated in a central data store, where the data can be analyzed and visualized, metrics can be computed, and rules can be checked.
For the Software Project Control Center project we cooperated with QAware GmbH, Munich. The AIF ZIM program of the German Federal Ministry of Economics and Technology funded the first 30 months of the project.
The Software Project Control Center is divided into two subsystems: the integration pipeline, which gathers traceability data and metrics from a variety of software engineering tools, and the analysis core, which analyzes the integrated data in a holistic way. Each subsystem was developed in a separate subproject.
The project partner QAware GmbH implemented the integration pipeline. The first step was to define TraceML, a modeling language for traceability information in conjunction with metrics. The language comprises a meta-model and a model library and allows customized traceability models to be defined efficiently. The integration pipeline uses TraceML as its lingua franca in all processing steps, from the extraction of traceability information to its transformation and integrated representation. We used the Eclipse Modeling Framework to define the TraceML models on each meta-model level. Furthermore, we used the Modeling Workflow Engine for model transformations and Eclipse CDO as our model repository. A set of widespread software engineering tools is connected to the integration pipeline, including Subversion, Eclipse, Jira, Enterprise Architect, and Maven.
The main contribution of our group to this project is the analysis core, i.e., the design and implementation of a domain-specific language for graph-based traceability analysis. Our Traceability Query Language (TracQL) significantly reduces the effort needed to implement traceability analyses. This is crucial for both industry and the research community, as the lack of expressiveness and the inefficient runtimes of other known approaches used to hinder the implementation of traceability analyses. TracQL eases not only the extraction but also the analysis of traceability data, using graph traversals that are denoted in a concise functional programming style. The language itself is built on top of Scala, a multi-paradigm programming language, and was successfully applied to several real-world industrial projects.
In 2014, we improved the modularity of the language to make it both more adaptable and extendable in terms of structure and operations. This not only increases its expressiveness but also improves the reusability of existing traceability analyses.
In 2015, we evaluated and documented our approach in order to emphasize its core attributes and to show its effectiveness. The three core attributes are:
- Representation independence: TracQL is adaptable to various data sources whose data types are available in statically typed form.
- Modularity: the approach is both modifiable and extendable in terms of structure and operations.
- Applicability: the language offers better expressiveness and performance than other approaches.
-
OpenMP/Java
(Third Party Funds Single)
Term: 01.10.2009 - 01.10.2015
Funding source: Industry
JaMP is an implementation of the well-known OpenMP standard, adapted for Java. JaMP allows one to program, for example, a parallel for loop or a barrier without resorting to low-level thread programming. For example:
class Test {
   int[] a, b, c; // assumed to be allocated and initialized elsewhere

   void foo() {
      //#omp parallel for
      for (int i = 0; i < a.length; i++) {
         a[i] = b[i] + c[i];
      }
   }
}
This is valid JaMP code. JaMP currently supports all of OpenMP 2.0 and parts of OpenMP 3.0, e.g., the collapse clause. JaMP generates pure Java 1.5 code that runs on every JVM. It also translates parallel for loops to CUDA-enabled graphics cards for extra speed gains; if a particular loop is not CUDA-able, it is translated to a threaded version that uses the cores of a typical multi-core machine. JaMP also supports the use of multiple machines and compute accelerators to solve a single problem; the two abstraction layers that make this possible are described below.
In 2010, the JaMP environment was extended to support the use of multiple machines and compute accelerators for solving a single problem. We developed two new abstraction layers. The lower layer provides abstract compute devices that wrap the actual CUDA GPUs, OpenCL GPUs, or multicore CPUs, wherever they might be in a cluster. The upper layer provides partitioned and replicated arrays. A partitioned array automatically partitions itself over the abstract compute devices, taking the individual accelerator speeds into account to achieve an equitable distribution. The JaMP compiler applies code analysis to decide which type of abstract array to use for a specific Java array in the user's program.
In 2012, we extended the JaMP framework to also handle Java objects on multiple machines and accelerators (and not just arrays of primitive types). We added two different ways to handle objects: standard shared objects are replicated on all compute devices, while arrays of objects can now also be replicated or partitioned over the different devices. To increase the performance of the program, the framework has to break with Java's semantics: Java's object structure is mapped to a flat memory structure for execution on the different devices.
In 2013, we examined how to better support Java objects in OpenMP parallel code, regardless of where the code is executed. We found that we needed to restrict the language slightly by forbidding inheritance of objects used in a parallel block. This ensures that objects will not have a different type at runtime than the one seen at compile time. We use this property, for example, to let object inlining into arrays occur naturally. With the added inlining, the communication of objects and arrays over the network and to the compute devices was accelerated enormously, along with a small performance increase on the devices themselves.
In 2014 we developed a JaMP implementation for Android 4.0. Currently this version only supports the SIMD construct of OpenMP.
In 2015 we added OpenMP tasks (OpenMP 3.0) to JaMP. This makes it possible to parallelize recursive algorithms with JaMP.
-
Parallelization techniques for embedded systems in automation
(Own Funds)
Term: 01.06.2009 - 31.12.2015
This project was launched in 2009 to address the refactoring and parallelization of applications used in the field of industrial automation. These programs are executed on specially designed embedded systems. This hardware forms an industry standard and is used worldwide. As multicore architectures are increasingly used in embedded systems, existing sequential software must be parallelized for these new architectures in order to improve performance. Since these programs are typically used in the industrial domain to control processes and factory automation, they have a long life cycle. Because of this, the programs often are no longer maintained by their original developers. Moreover, a lot of effort was spent to guarantee that the programs work reliably. For these reasons the software is extended only very reluctantly.
Therefore, a migration of these legacy applications to new hardware and their parallelization cannot be done manually, as this is too error-prone. We thus need tools that perform these tasks automatically or that aid the developer with migration and parallelization.
Research on parallelization techniques
We developed a special compiler for the parallelization of existing automation programs. First, we examined automation applications with respect to automatic parallelizability and found that an efficient automatic parallelization is hard to achieve with existing techniques. This part of the project therefore focused on two steps. As a first step, we developed a data dependence analysis that identifies potential critical sections in a parallel program, presents them to the programmer, and adds their protection to the code. We were able to show that our approach finds atomic blocks that closely match the atomic blocks an expert would add to the code. Moreover, we showed in 2014 that the impact on execution times is negligible if our technique finds atomic blocks that are larger than necessary or not necessary at all.
As a second step, we refined and enhanced existing techniques (software transactional memory (STM) and lock inference) to implement atomic blocks. In our approach, an atomic block uses STM as long as lock inference would lead to coarse-grained synchronization, and it switches from STM to lock inference as soon as fine-grained synchronization is possible. With this technique, an atomic block always uses fine-grained synchronization while the runtime overhead of STM is minimized at the same time. We showed that (compared to a pure STM or lock-inference implementation) our technique speeds up execution by a factor of between 1.1 and 6.3. Although fine-grained synchronization generally leads to better performance than a coarse-grained solution, there are cases where both perform equally well. We therefore presented a runtime mechanism for an STM that also works together with our combined technique. This mechanism starts with a small number of locks, i.e., coarse-grained locking, where accesses to different shared variables are protected by the same lock. If this coarse-grained locking causes too many non-conflicting accesses to wait for the same lock, our approach gradually increases the number of locks, making the locking more fine-grained so that non-conflicting accesses can be executed concurrently (see the sketch below). Our runtime mechanism, which dynamically tunes the locking granularity, makes programs run up to 3.0 times faster than a fixed coarse-grained synchronization.
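A minimal Java sketch of the dynamic-granularity idea, assuming shared variables are identified by integer ids (the actual mechanism is integrated with the STM and lock-inference runtime):

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Starts with coarse locking (few stripes) and doubles the number of
 *  lock stripes once too much contention is observed, so that accesses
 *  to different shared variables stop waiting for the same lock. */
class AdaptiveStripedLocking {
    private static final long SPLIT_THRESHOLD = 10_000;
    private final ReentrantReadWriteLock resize = new ReentrantReadWriteLock();
    private final AtomicLong contended = new AtomicLong();
    private ReentrantLock[] stripes = newStripes(2);   // coarse start

    void withLock(int variableId, Runnable criticalSection) {
        resize.readLock().lock();                      // blocks resizing, not peers
        try {
            ReentrantLock l = stripes[Math.floorMod(variableId, stripes.length)];
            if (!l.tryLock()) {                        // contention observed
                contended.incrementAndGet();
                l.lock();
            }
            try { criticalSection.run(); } finally { l.unlock(); }
        } finally {
            resize.readLock().unlock();
        }
        maybeRefine();
    }

    private void maybeRefine() {
        if (contended.get() < SPLIT_THRESHOLD) return;
        // The write lock is only granted while no stripe lock is held,
        // so the stripe array can be swapped safely; skip if busy.
        if (resize.writeLock().tryLock()) {
            try {
                if (contended.getAndSet(0) >= SPLIT_THRESHOLD) {
                    stripes = newStripes(stripes.length * 2);  // finer granularity
                }
            } finally { resize.writeLock().unlock(); }
        }
    }

    private static ReentrantLock[] newStripes(int n) {
        ReentrantLock[] ls = new ReentrantLock[n];
        for (int i = 0; i < n; i++) ls[i] = new ReentrantLock();
        return ls;
    }
}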
We completed this project part in 2014.
Research on migration techniques
Our research on the migration of legacy applications originally centered on a tool that automatically replaces suboptimal code constructs with better code. The code sequences to be replaced, as well as their replacements, were specified by developers in a newly developed pattern description language. However, we found this approach to be too difficult for novice developers.
This led us to develop a new tool that automatically learns and generalizes patterns from source code archives, recognizes them in other projects, and presents recommendations to developers. The foundation of our tool is the comparison of two versions of the same program: it extracts the changes made between the two versions, derives generalized patterns of suboptimal and better code from these changes, and saves the patterns in a database. Our tool then uses these patterns to suggest similar changes for the source code of other programs.
In 2014 we developed a new symbolic code execution engine to minimize the number of wrong recommendations. Depending on the number and the generality of the patterns in the database, our tool may generate unfitting recommendations without the new engine. To discard the unfitting ones, we compare a summary of the semantics/behavior of the recommendation with a summary of the semantics/behavior of the database pattern. If the two differ too severely, our tool drops the recommendation from the results. The distinctive features of our approach are its applicability to isolated code fragments and its automatic configuration, which does not require any human interaction.
The latest results of our tool SIFE are available online (last update: 2014-05-09).
Parts of the project were funded by the "ESI-Anwendungszentrum" [http://www.esi-anwendungszentrum.de/]
-
Integrated Tool Chain for Meta-model-based Process Modeling and Execution
(Third Party Funds Single)
Term: 01.10.2008 - 31.12.2012
Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
As demands on the development of complex software systems continuously increase, compliance with well-defined software development processes becomes ever more important. Especially large and globally distributed software development projects tend to require long-running and dynamically changeable processes that span multiple organizations. To describe and support such processes, suitable process modeling languages and powerful tool support are strongly needed. The results of a preceding cooperation project showed that today's tool market lacks integrated tool chains that actually support the fine-grained and precise modeling of software development processes as well as their computer-aided execution, controlling, and monitoring.
This gap was bridged by a cooperation project carried out together with the develop group as an industrial partner and funded by the BMWi. The project started in October 2008, was scheduled for three researchers, and was finished in September 2011. Its objective was to prototype an integrated tool chain, based on a rigorous meta-model-based approach, that supports the modeling, enactment, and execution of industrial software development processes. With the applicability of such a tool in mind, the approach was mainly intended to provide wide adaptability of process models to different industrial development scenarios, to define a user-friendly concept of process description, and to establish extensive computer-aided process execution support, contributing to the efficiency of development activities. These benefits were achieved by a high degree of formalism, by an integrated, generic concept of process modeling and process enactment, and by using commonly accepted industrial standards (UML, SPEM).
The integrated tool chain developed in this project is based on an extension of the SPEM standard (eSPEM - enactable SPEM). eSPEM adds a behavior modeling concept by reusing UML activity and state machine diagrams. In addition, eSPEM provides behavior modeling concepts that are specific to software development processes, for example, dynamic task creation and scheduling.
In 2012, an overview of the tool chain and eSPEM was presented at the "First Workshop on Academics Modeling with Eclipse", held in conjunction with the "8th European Conference on Modeling Foundations and Applications". In addition, practical experience from modeling SDPs in industrial projects has shown a rising importance of standards and reference models, subsequently summarized under the term quality standard. These quality standards specify requirements for the target-oriented and effective execution of software development projects. The requirements address different goals related to, e.g., quality and efficiency (Automotive SPICE, CMMI) or safety (ISO 26262 Road Vehicles - Functional Safety) aspects of SWDPMs (Software Development Process Models). In other words, these requirements - often described in terms of best practices - are imposed on the software process definition that is typically described by SWDPMs. Tracing these requirements to the process definition is a precondition for supporting efficient assessment activities and process improvement projects.
An additional goal of this research project therefore lies in the integration of these quality standards with SWDPMs, with a special focus on environments that require conformance to more than one quality standard (e.g., CMMI, Automotive SPICE, and ISO 26262).
-
Wireless Localization
(Third Party Funds Single)
Term: 01.05.2008 - 14.11.2013
Funding source: Fraunhofer-Gesellschaft
Localization systems (also known as Realtime Location Systems, or RTLS) are becoming more and more popular in industry sectors such as logistics, automation, and many more. These systems provide valuable information about the whereabouts of objects at runtime, so that processes can be traced, analyzed, and optimized. Besides the research activities at the core of localization systems (like resilient and interference-free location technologies or methods for highly accurate positioning), algorithms and techniques emerge that identify meaningful information for further processing steps. In this context, the aim of the wireless localization project is research on automatic configuration methods for RTLSs as well as the generation of dynamic motion models and techniques for event processing on position streams at runtime.
In 2009, we continued the development of our algorithms for estimating the position (pose) of the receiving antennas of location systems. The algorithms estimate measuring points that allow a fast and accurate pose measurement. We used a robot to execute the measurements automatically. The developed algorithm considers obstacles and the receiving characteristics of the location system and can sort out errors contained in the measurement data (e.g., multipath measurements).
In 2010, models were developed that can be used as dynamic motion models, with learning methods applied to adapt the models at run-time. A formal language called TBL (Trajectory Behavior Language) was developed for describing trajectories; additional algorithms can shrink the size of such a description and hence compress the data required to store trajectories. We are evaluating methods for learning the motion models online, and these are evaluated in a study with respect to motion prediction. Moreover, we are investigating whether events can be predicted by analyzing and learning event streams from the localization system at runtime.
-
Model Driven Component Composition
(Third Party Funds Single)
Term: 15.06.2007 - 31.12.2011
Funding source: Industry
This project is part of the INI.FAU collaboration between AUDI AG and the University of Erlangen-Nuremberg. It examines model-driven ways to integrate vehicle functions on electronic control units (ECUs) and develops supporting methods and tools for this task. The insights gained in the course of this project are validated in practice by integrating a damper control system into an AUTOSAR ECU.
In the automotive industry it is common practice to develop in-car software in a model-based way on a high level of abstraction. To eliminate uncertainties concerning resource consumption and runtime, it is necessary to test the developed software on the target hardware as early as possible. But due to cost and safety requirements, the integration of the software into an ECU is very time-consuming and demands special expertise beyond that of the function developer. AUTOSAR (AUTomotive Open System ARchitecture) is on its way to becoming a standard for the basic software on ECUs. But due to the novelty of this standard, there are neither processes nor tools that support the integration of the developed in-car software into an ECU.
In 2008, we examined the modeling expressiveness of AUTOSAR with respect to both its applicability and possible conflicts with existing standards and technologies currently in use at Audi. Furthermore, the automatic generation of an AUTOSAR software architecture from a single damper control component was implemented.
Since 2009, a model-driven approach that supports the integration of software into an ECU has been implemented and integrated into the tool set used at Audi. In particular, we are looking at the automatic configuration of the bus communication by means of a bus database and at automatic task scheduling among the application processes. The model-driven development, in this case based on the Eclipse Modeling Framework, enables easier tailoring of the emerging prototype to changing requirements.
In 2010, we exploited the information available in an AUTOSAR project to automatically configure local and remote communication between software components. We also developed a genetic algorithm that uses dependency information to automatically generate task schedules that minimize communication latencies between cooperating software components. The existing prototype was extended with the above-mentioned methods.
-
JavaParty
(Own Funds)
Term: 01.04.2007 - 31.12.2010
JavaParty [http://svn.ipd.uni-karlsruhe.de/trac/javaparty/wiki/JavaParty] allows easy porting of multi-threaded Java programs to distributed environments such as clusters. Regular Java already supports parallel applications with threads and synchronization mechanisms. While multi-threaded Java programs are limited to a single address space, JavaParty extends the capabilities of Java to distributed computing environments.
The normal way of porting a parallel application to a distributed environment is to use a communication library. Java's remote method invocation (RMI) renders the implementation of communication protocols unnecessary but still leads to increased program complexity. The reasons for this are RMI's limited capabilities and the additional functionality that has to be implemented for the creation of and access to remote objects.
The JavaParty approach is different: JavaParty classes can be declared as remote. While regular Java classes are limited to a single virtual machine, remote classes and their instances are visible and accessible anywhere in the distributed JavaParty environment. As far as remote classes are concerned, the JavaParty environment can be viewed as a Java virtual machine that is distributed over several computers. Creation of and access to remote classes are syntactically indistinguishable from those of regular Java classes (see the sketch below).
In 2007/08, a completely new version of the JavaParty compiler was implemented. It supports the current Java standard 1.5/1.6. The implementation is based on the open and freely available Eclipse compiler framework, so future developments of the Java language and the corresponding extensions of the Eclipse compiler automatically become available for JavaParty as well.
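A short sketch in JavaParty style; note the non-standard remote class modifier (the concrete class and methods are illustrative, not taken from the project):

// JavaParty extends Java with a "remote" class modifier: instances of
// remote classes may live on any node of the distributed environment,
// yet creation and method calls look like plain Java.
public remote class Counter {
    private int value;                 // state lives where the instance lives
    public void increment() { value++; }
    public int get() { return value; }
}

class Client {
    void work() {
        Counter c = new Counter();     // may be placed on another machine
        c.increment();                 // syntactically a normal method call
    }
}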
In 2009, the JavaParty compiler was extended by a semantics for inner classes.
We have reached the following goals in 2010:
- Most of the previously self-implemented structures of the run-time system were replaced with more efficient standard Java implementations for reasons of stability.
- For compatibility and security reasons, the communication layer (KaRMI) was reimplemented based on Java's current socket technology.
- A new "near" context was added to remote classes. With this new context that provides host local memory for each instance, locality-aware algorithms can be expressed. -
ParSeMiS - the Parallel and Sequential Graph Mining Suite
(Own Funds)
Term: 01.05.2006 - 31.12.2010
The ParSeMiS project (Parallel and Sequential Graph Mining Suite) searches for frequent, interesting substructures in graph databases. This task is becoming increasingly popular because science and commerce need to detect, store, and process complex relations in huge graph structures.
For huge data sets that cannot be processed manually, algorithms are needed that detect interesting correlations. Since the problem is NP-hard in general and requires huge amounts of computation time and memory, parallel or specialized algorithms and heuristics are required that can perform the search within given time and memory limits.
Our target is to provide an efficient and flexible tool for searching in arbitrary graph data, to improve the adaptation to new application areas, and to simplify and unify the design of new mining algorithms.
Building on the results of the ParMol project, the following goals were achieved in 2006/2007:
- Restructuring and redesign of the grown ParMol structures into a flexible graph library.
- Extension of the object-oriented graph design with more compact data structures that are better suited for parallelization.
- Porting and decomposition of the algorithms gSpan and Gaston into the new structures, and addition of extensions for the current application area of procedural abstraction.
- Design and implementation of a new algorithm for searching in directed acyclic graphs (DAGs) as a specialization for procedural abstraction.
- Implementation of an adapted graphical display for DAGs.
In 2008, the following goals were achieved:
- Documentation and publication of the source code to enlarge the user base of the project,
- Implementation of a specialized graph layout for DAGs,
- Restructuring of the graphical user interface, and
- Added support for clusters of multi-core nodes.
In 2009, the following goals were addressed:
- Optimizations for embedding-based frequency mining: a more detailed look at the pruning-related properties of the maximum-clique test resulted in a huge run-time improvement, particularly for the low frequencies that are of special interest for applications. This is achieved by an early detection, during the NP-complete test, of whether a fragment can become frequent at all.
- Improved distribution for clusters of multi-core nodes: co-locating threads in the same memory speeds up the parallel search. First results were seen in 2009; more are expected in 2010.
In 2010, the distributed stack implementations were also tested on other algorithms and data structures.
-
Tapir
(Own Funds)
Term: 01.01.2006 - 31.12.2010
Tapir is a new programming language to ease systems programming. Systems programming includes networking protocols, operating systems, middleware, DSM systems, etc. Such systems are critical for the functioning of a computer system, as they supply the services that user applications require. For example, an operating system supplies an operating environment and thereby abstracts from the concrete hardware. A DSM system simulates a single shared-memory machine by abstracting from the individual machines inside a cluster, so that a (user-level application on a) cluster can be programmed without explicit message passing.
Compared to application programming, systems programming has a different set of requirements, and the programming style differs greatly from the styles used for user-level applications. Moreover, the performance requirements of systems programs are usually very strict, as the performance of the complete system relies heavily on its underlying systems layers. Bugs in systems code likewise have great repercussions on the reliability of the complete system. Taken together, the following observations hold when using high-level languages for systems programming:
- High-level languages, such as C++, C#, and Java 'hide' implementation details from the programmer. A programmer for example no longer needs to know how exactly a method call is implemented. This knowledge is, however, required when doing systems programming.
- High-level languages supply functionality that is neither required nor wanted. For example, when programming an operating system, automatic language driven exception handling or garbage collection are not wanted.
- Systems programs require no high abstraction levels of the kind that common high-level programming languages supply. Likewise, the libraries that a given language offers often cannot be provided in a systems context, usually because the system itself supplies the functionality that the library is supposed to provide.
The basics of the Tapir language have been created. While Tapir has some similarities to Java, C#, and C++, all unneeded and unwanted functionality of these languages has been removed. For example, Tapir has no automatic memory management, no exception handling, and no type casts. Class and object concepts have been kept, but inheritance has been removed. The resulting Tapir programs can be verified by means of model checking, even while the program is still being developed. Verification is eased because code pieces that are implementation details can be marked as such and safely ignored by the verifier. Tapir code can be executed in parallel, for example also on a graphics card, without the possibility that the common programming errors associated with parallelism occur.
Even while the language is still being developed, a prototype DSM protocol has already been implemented in Tapir. We have evaluated RDMA-based DSM protocols so that they can be added to the Tapir language. Tapir's semantic checks are implemented by means of model checking, which is a very memory-intensive analysis. This made us write our own Java virtual machine, called LVM, which is especially suited for managing large numbers of objects. LVM outperforms standard Java VMs as soon as swapping becomes necessary.
In 2006/2007, we worked on the fundamental language properties of Tapir. Although Tapir is based on existing high-level languages such as C++ and Java, all unnecessary properties and functions were removed. For example, Tapir lacks garbage collection, exception handling, and type casts; classes and objects can be defined, but no inheritance relation between classes is allowed. A system program specified in Tapir can be checked for errors with model-checking techniques already during development. A prototype compiler and a verification tool were implemented in 2006/2007.
In 2008, LVM was optimized both for sequential execution and for distributed execution on a cluster of workstations. This allows for faster verification of Tapir programs on clusters and for faster running of scientific Java programs.
In 2009, the Tapir language itself was improved, both to ease automatic program verification and to allow more efficient code to be generated. For example, the language has become easier to verify because code pieces that are implementation details can be marked as such and safely ignored by the verifier. Efficiency has been improved such that selected parts of programs can now be executed in parallel, for example also on a graphics card, without the possibility that the common programming errors associated with parallelism occur.
-
Cluster and Grid computing made easy
(Own Funds)
Term: 01.01.2006 - 31.12.2010
Jackal is a project to create a distributed shared memory (DSM) system for Java. This means that one can write a multi-threaded program (which would also run on a single machine using a normal JVM) and deploy it on a cluster connected by a network. Jackal also includes a checkpointer, so it can periodically write the program state to disk for fault tolerance.
To make things more interesting, the program can also be written using OpenMP annotations, which allows re-parallelization. Combined with checkpointing, this allows a program to be restarted on a different number of machines than it was started with.
The Jackal DSM is suited not only for traditional clusters where each node contains a single CPU and core, but also for newer-style clusters where each node contains a multi-core CPU. Additionally, Jackal has extensions and tools for grid computing.
-
International Collegiate Programming Contest at the FAU
(Own Funds)
Term: since 01.11.2002
URL: http://www2.informatik.uni-erlangen.de/research/ICPC/
The International Collegiate Programming Contest (ICPC) has taken place every year since 1977. Teams of three students try to solve about 13 programming problems within five hours. What makes this task even harder is that there is only one computer available per team. The problems demand solid knowledge of algorithms from all areas of computer science and mathematics, e.g., graphs, combinatorics, strings, algebra, and geometry. To solve the problems, the teams need to find a correct and efficient algorithm and implement it.
The ICPC consists of three rounds. First, each participating university hosts a local contest to determine the up to three teams that afterwards compete in one of the various regional contests. Germany lies in the catchment area of the Northwestern European Regional Contest (NWERC), with competing teams from Great Britain, the Benelux countries, Scandinavia, etc. The winners of all regional contests in the world (and some second-place holders) advance to the world finals in spring of the following year (2023 in Sharm El Sheikh, Egypt).
On January 27, 2024, the Winter Contest took place once again. 108 teams from 17 universities participated, including 15 teams from Erlangen; our best team finished 32nd. On June 22, the German Collegiate Programming Contest was held at several German universities, with 15 teams from Erlangen; the best FAU team secured 22nd place out of 94 participating teams from all over Germany. The NWERC took place on November 24 in Delft, where FAU was represented by one team that finished 19th among 80 participating teams. As usual, we also conducted the main seminar “Hello World! - Advanced Programming” in 2024.
2024
Multilayer Multipurpose Caches for OpenMP Target Regions on FPGAs
Proceedings of the 20th International Workshop on OpenMP, IWOMP 2024 (Perth, Australia, 23.09.2024 - 25.09.2024)
In: Espinosa, A., Klemm, M., de Supinski, B.R., Cytowski, M., Klinkenberg, J. (ed.): OpenMP: Advancing OpenMP for Future Accelerators, Cham: 2024
DOI: 10.1007/978-3-031-72567-8_6
BibTeX: Download
, , :
Multilayer Multipurpose Caches for OpenMP Target Regions on FPGAs [Data set]
(2024)
DOI: 10.5281/zenodo.12755510
BibTeX: Download
(online publication)
, , :
Employing Polyhedral Methods to Optimize Stencils on FPGAs with Stencil-specific Caches, Data Reuse, and Wide Data Bursts
14th International Workshop on Polyhedral Compilation Techniques, (IMPACT 2024, in conjunction with HiPEAC 2024) (München, 17.01.2024)
DOI: 10.48550/arXiv.2401.13645
URL: https://impact-workshop.org/impact2024/#mayer24-fpgas
BibTeX: Download
, , :
Employing polyhedral methods to optimize stencils on FPGAs with stencil-specific caches, data reuse, and wide data bursts [Reproduction Package]
(2024)
DOI: 10.5281/zenodo.10396084
BibTeX: Download
(online publication)
, , :
Register Expansion and SemaCall: 2 Low-overhead Dynamic Watermarks Suitable for Automation in LLVM
ACM SIGSAC Conference on Computer and Communications Security (CCS'24), Workshop on Offensive and Defensive Techniques in the Context of Man At The End (MATE) attacks (Checkmate ’24) (Salt Lake City, UT, 18.10.2024 - 18.10.2024)
In: CheckMATE '24: Proceedings of the 2024 Research on offensive and defensive techniques in the context of Man At The End (MATE) attacks, New York: 2024
DOI: 10.1145/3689934.3690815
URL: https://dl.acm.org/doi/10.1145/3689934.3690815#
BibTeX: Download
, , :
Register Expansion and SemaCall: 2 low-overhead dynamic Watermarks suitable for Automation in LLVM [Source code and Raw Experiment data]
(2024)
DOI: 10.5281/zenodo.13337275
BibTeX: Download
(other)
, , , , :
2023
Multipurpose Cacheing to Accelerate OpenMP Target Regions on FPGAs (Best Paper Award)
Proceedings of the 19th International Workshop on OpenMP, IWOMP 2023 (Bristol, GBR, 13.09.2023 - 15.09.2023)
In: Simon McIntosh-Smith, Tom Deakin, Michael Klemm, Bronis R. de Supinski, Jannis Klinkenberg (ed.): OpenMP: Advanced Task-Based, Device and Compiler Programming 2023
DOI: 10.1007/978-3-031-40744-4_10
BibTeX: Download
, , :
Multipurpose Cacheing to Accelerate OpenMP Target Regions on FPGAs [Data set]
14114 (2023), p. 147 - 162
ISSN: 0302-9743
DOI: 10.5281/zenodo.8055889
BibTeX: Download
(online publication)
, , :
Practical Flaky Test Prediction using Common Code Evolution and Test History Data
16th IEEE International Conference on Software Testing, Verification and Validation, ICST 2023 (Dublin, 16.04.2023 - 20.04.2023)
In: IEEE (ed.): Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023 2023
DOI: 10.1109/ICST57152.2023.00028
BibTeX: Download
, , , , :
Practical Flaky Test Prediction using Common Code Evolution and Test History Data [replication package]
figshare (2023)
DOI: 10.6084/m9.figshare.21363075
BibTeX: Download
(online publication)
, , , , :
Employing Polyhedral Methods to Reduce Data Movement in FPGA Stencil Codes
Languages and Compilers for Parallel Computing (LCPC 2022) (Chicago, IL, 12.10.2022 - 14.10.2022)
In: Charith Mendis, Lawrence Rauchwerger (ed.): Proc. of the 35th Intl. Workshop on Languages and Compilers for Parallel Computing (LCPC 2022), Cham: 2023
DOI: 10.1007/978-3-031-31445-2_4
BibTeX: Download
, , :
2022
Reducing OpenMP to FPGA Round-trip Times with Predictive Modelling
18th International Workshop on OpenMP (IWOMP 2022) (Chattanooga, TN, 27.09.2022 - 30.09.2022)
In: Michael Klemm, Bronis R. de Supinski, Jannis Klinkenberg, Brandon Neth (ed.): OpenMP in a Modern World: From Multi-device Support to Meta Programming 2022
DOI: 10.1007/978-3-031-15922-0
BibTeX: Download
, , :
Reducing OpenMP to FPGA Round-trip Times with Predictive Modelling [Data set]
Zenodo (2022)
DOI: 10.5281/zenodo.7534795
BibTeX: Download
(online publication)
, , :
Static And Dynamic Dependency Visualization in a Layered Software City
In: SN Computer Science 3 (2022), p. Article 511
ISSN: 2661-8907
DOI: 10.1007/s42979-022-01404-6
BibTeX: Download
, :
Trace visualization within the Software City metaphor: Controlled experiments on program comprehension
In: Information and Software Technology 150 (2022), p. Article 106989
ISSN: 0950-5849
DOI: 10.1016/j.infsof.2022.106989
The ORKA-HPC Compiler — Practical OpenMP for FPGAs
34th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2021) (Newark, DE, 13.10.2021 - 14.10.2021)
In: Xiaoming Li, Sunita Chandrasekaran (ed.): Proceedings of the 34th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2021), Cham: 2022
DOI: 10.1007/978-3-030-99372-6
URL: https://lcpc2021.github.io/pre_workshop_papers/Mayer_lcpc21.pdf
2021
Cloud Cost City: A Visualization of Cloud Costs Using the City Metaphor
16th International Conference on Information Visualization Theory and Applications (IVAPP) (Virtual, originally Vienna, Austria, 08.02.2021 - 10.02.2021)
In: Christophe Hurter, Helen Purchase, Jose Braz, Kadi Bouatouch (ed.): Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP) - Volume 3: IVAPP, Portugal: 2021
DOI: 10.5220/0010254701730180
Trace Visualization within the Software City Metaphor: A Controlled Experiment on Program Comprehension
IEEE Working Conference on Software Visualization (VISSOFT) (Virtual, originally Luxembourg City, Luxembourg, 27.09.2021 - 28.09.2021)
In: Proceedings of the IEEE Working Conference on Software Visualization (VISSOFT) 2021
DOI: 10.1109/VISSOFT52517.2021.00015
A layered software city for dependency visualization (Best Paper Award)
16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021 (Virtual, originally Vienna, Austria, 08.02.2021 - 10.02.2021)
In: Christophe Hurter, Helen Purchase, Jose Braz, Kadi Bouatouch (ed.): Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: IVAPP, Portugal: 2021
DOI: 10.5220/0010180200150026
URL: http://www.ivapp.visigrapp.org
Approximate Bit Dependency Analysis to Identify Program Synthesis Problems as Infeasible
International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI'2021) (Copenhagen, 17.01.2021 - 19.01.2021)
In: Fritz Henglein, Sharon Shoham, Yakir Vizel (ed.): Verification, Model Checking, and Abstract Interpretation (VMCAI 2021), Cham: 2021
DOI: 10.1007/978-3-030-67067-2_16
URL: https://i2git.cs.fau.de/i2public/publications/-/raw/master/vmcai2021.pdf
Test Case Reduction: A Framework, Benchmark, and Comparative Study
International Conference on Software Maintenance and Evolution (ICSME 2021) (Luxembourg (LU), 27.09.2021 - 01.10.2021)
In: Proceedings of the International Conference on Software Maintenance and Evolution (ICSME 2021) 2021
DOI: 10.1109/ICSME52107.2021.00012
URL: https://i2git.cs.fau.de/i2public/publications/-/raw/master/ICSME21.pdf
LLWM & IR-Mark: Integrating Software Watermarks into an LLVM-based Framework
ACM SIGSAC Conference on Computer and Communications Security (CCS'21), Workshop on Offensive and Defensive Techniques in the Context of Man At The End (MATE) Attacks (Checkmate ’21) (Republic of Korea, 19.11.2021 - 19.11.2021)
In: Checkmate '21: Proceedings of the 2021 Research on offensive and defensive techniques in the Context of Man At The End (MATE) Attacks, New York: 2021
DOI: 10.1145/3465413.3488576
2020
MutantDistiller: Using Symbolic Execution for Automatic Detection of Equivalent Mutants and Generation of Mutant Killing Tests
15th International Workshop on Mutation Analysis (Mutation 2020) (Porto, 24.10.2020 - 24.10.2020)
In: 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) 2020
DOI: 10.1109/ICSTW50294.2020.00055
URL: https://mutation-workshop.github.io/2020/
RNN-aided Human Velocity Estimation from a Single IMU
In: Sensors 20 (2020), p. 1-31
ISSN: 1424-8220
DOI: 10.3390/s20133656
URL: https://www.mdpi.com/1424-8220/20/13/3656
Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments
15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (GRAPP 2020) (Valletta, 27.02.2020 - 29.02.2020)
In: Kadi Bouatouch, A. Augusto Sousa, Jose Braz (ed.): Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, Portugal: 2020
DOI: 10.5220/0008989903070318
URL: http://www.grapp.visigrapp.org/
Towards Collaborative and Dynamic Software Visualization in VR
15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (GRAPP 2020) (Valletta, 27.02.2020 - 29.02.2020)
In: Andreas Kerren, Christophe Hurter, Jose Braz (ed.): Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: IVAPP, Portugal: 2020
DOI: 10.5220/0008945201490156
URL: http://www.ivapp.visigrapp.org/
Artifact for "Approximate Bit Dependency Analysis to Identify Program Synthesis Problems as Infeasible"
Zenodo (2020)
DOI: 10.5281/zenodo.4275482
(online publication)
Language-Agnostic Generation of Compilable Test Programs
International Conference on Software Testing, Verification and Validation (ICST 2020) (Porto, 24.10.2020 - 28.10.2020)
In: Proceedings of the International Conference on Software Testing, Verification and Validation (ICST 2020) 2020
DOI: 10.1109/ICST46399.2020.00015
URL: https://i2git.cs.fau.de/i2public/publications/-/raw/master/ICST20.pdf
2019
GPU-Accelerated Fixpoint Algorithms for Faster Compiler Analyses (Best Paper Award)
28th International Conference on Compiler Construction (Washington, D.C., 16.02.2019 - 17.02.2019)
In: ACM (ed.): Proceedings of the 28th International Conference on Compiler Construction, New York, NY, USA: 2019
DOI: 10.1145/3302516.3307352
URL: http://www2.informatik.uni-erlangen.de/publication/download/cc19_parcan_blass.pdf
Which Graph Representation to Select for Static Graph-Algorithms on a CUDA-capable GPU
12th Workshop on General Purpose Processing Using GPUs (Providence, RI, 13.04.2019 - 13.04.2019)
In: ACM (ed.): Proceedings of the 12th Workshop on General Purpose Processing Using GPUs, New York, NY, USA: 2019
DOI: 10.1145/3300053.3319416
URL: http://ieeetcca.org/2018/12/16/12th-workshop-on-general-purpose-processing-using-gpu-gpgpu-2019-asplos-2019/
Efficient Inspected Critical Sections in Data-Parallel GPU Codes
30th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2017) (College Station, TX, 11.10.2017 - 13.10.2017)
In: Rauchwerger, Lawrence (ed.): Proceedings of the 30th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2017), Cham: 2019
DOI: 10.1007/978-3-030-35225-7_15
URL: https://www2.cs.fau.de/publication/download/lcpc2017_blass.pdf
A Bidirectional LSTM for Estimating Dynamic Human Velocities from a Single IMU
10th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Pisa, 30.09.2019 - 03.10.2019)
In: IEEE (ed.): Proceedings of the 10th International Conference on Indoor Positioning and Indoor Navigation (IPIN) 2019
DOI: 10.1109/IPIN.2019.8911814
URL: https://www2.cs.fau.de/publication/download/IPIN2019.pdf
Sick Moves! Motion Parameters as Indicators of Simulator Sickness
In: IEEE Transactions on Visualization and Computer Graphics 25 (2019), p. 3146-3157
ISSN: 1077-2626
DOI: 10.1109/TVCG.2019.2932224
URL: https://ieeexplore.ieee.org/document/8798880
SeSaMe: A Data Set of Semantically Similar Java Methods
16th International Conference on Mining Software Repositories (MSR 2019) (Montréal, QC, Canada, 26.05.2019 - 27.05.2019)
In: Proceedings of the 16th International Conference on Mining Software Repositories (MSR 2019), Piscataway, NJ, USA: 2019
DOI: 10.1109/MSR.2019.00079
URL: https://i2git.cs.fau.de/i2public/publications/-/raw/master/MSR19.pdf
SeSaMe: A Data Set of Semantically Similar Java Methods [Data set]
Zenodo (2019)
DOI: 10.5281/zenodo.2558377
(online publication)
OpenMP on FPGAs - A Survey
15th International Workshop on OpenMP (IWOMP 2019) (Auckland, 11.09.2019 - 13.09.2019)
In: Xing Fan, Bronis R. de Supinski, Oliver Sinnen, Nasser Giacaman (ed.): OpenMP: Conquering the Full Hardware Spectrum - Proceedings of the 15th International Workshop on OpenMP (IWOMP 2019), Cham: 2019
DOI: 10.1007/978-3-030-28596-8_7
URL: https://link.springer.com/content/pdf/10.1007/978-3-030-28596-8_7.pdf
2018
FAU-Inf2/ARES: PhD Thesis Version
Zenodo (2018)
DOI: 10.5281/zenodo.1183903
(online publication)
FAU-Inf2/tree-measurements: PhD Thesis Version [Data set]
Zenodo (2018)
DOI: 10.5281/zenodo.1183900
(online publication)
Head-to-Body-Pose Classification in No-Pose VR Tracking Systems
25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) (Reutlingen, 18.03.2018 - 22.03.2018)
In: Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) 2018
DOI: 10.1109/VR.2018.8446495
URL: http://www2.informatik.uni-erlangen.de/publication/download/IEEE-VR2018b.pdf
Human Compensation Strategies for Orientation Drifts
25th IEEE Conference on Virtual Reality and 3D User Interfaces (Reutlingen, 18.03.2018 - 22.03.2018)
In: Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) 2018
DOI: 10.1109/VR.2018.8446300
URL: https://www2.cs.fau.de/publication/download/IEEE-VR2018a.pdf
Supervised Learning for Yaw Orientation Estimation
(2018)
ISSN: 2471-917X
DOI: 10.1109/IPIN.2018.8533811
URL: https://www2.cs.fau.de/publication/download/IPIN2018a.pdf
Recurrent Neural Networks on Drifting Time-of-Flight Measurements
9th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2018) (Nantes, 24.09.2018 - 27.09.2018)
In: Proceedings of the 9th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2018) 2018
DOI: 10.1109/IPIN.2018.8533813
URL: https://www2.cs.fau.de/publication/download/IPIN2018b.pdf
Optical Camera Communication for Active Marker Identification in Camera-based Positioning Systems
15th Workshop on Positioning, Navigation and Communications (WPNC'18) (Bremen, 25.10.2018 - 26.10.2018)
In: Proceedings of the 15th Workshop on Positioning, Navigation and Communications (WPNC'18) 2018
DOI: 10.1109/WPNC.2018.8555846
URL: https://www2.cs.fau.de/publication/download/WPNC2018.pdf
FAU-Inf2/cthree: PhD Thesis Version
Zenodo (2018)
DOI: 10.5281/zenodo.1183902
(online publication)
2017
More Accurate Recommendations for Method-Level Changes
11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE2017) (Paderborn, 04.09.2017 - 08.09.2017)
In: Proceedings of 2017 11th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE2017), New York, NY, USA: 2017
DOI: 10.1145/3106237.3106276
URL: https://www2.cs.fau.de/publication/download/ESECFSE17.pdf
FAU-Inf2/treedifferencing: Version of the ASE Publication 2016
Zenodo (2017)
DOI: 10.5281/zenodo.840877
(online publication)
Acoustical manipulation for redirected walking
23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17) (Gothenburg, 08.11.2017 - 10.11.2017)
In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17), New York: 2017
DOI: 10.1145/3139131.3141205
URL: https://www2.cs.fau.de/publication/download/VRST2017.pdf
Diff Graphs for a fast Incremental Pointer Analysis
12th Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS 2017) (Barcelona, 19.06.2017 - 19.06.2017)
In: ACM (ed.): Proceedings of the 12th Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS'17) 2017
DOI: 10.1145/3098572.3098578
Chronix: Long Term Storage and Retrieval Technology for Anomaly Detection in Operational Data
15th USENIX Conference on File and Storage Technologies (FAST 17) (Santa Clara, CA, 27.02.2017 - 02.03.2017)
In: USENIX Association (ed.): Proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST 17) 2017
Open Access: https://www.usenix.org/conference/fast17/technical-sessions/presentation/lautenschlager
URL: https://www.usenix.org/system/files/conference/fast17/fast17-lautenschlager.pdf
AuDoscore: Automatic Grading of Java or Scala Homework
Third Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2017) (Potsdam, 05.10.2017 - 06.10.2017)
In: Sven Strickroth, Oliver Müller, Michael Striewe (ed.): Proceedings of the Third Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2017) 2017
Open Access: http://ceur-ws.org/Vol-2015/ABP2017_paper_01.pdf
URL: http://ceur-ws.org/Vol-2015/ABP2017_paper_01.pdf
2016
Move-Optimized Source Code Tree Differencing
31st International Conference on Automated Software Engineering (ASE 2016) (Singapore, 03.09.2016 - 09.09.2016)
In: Proceedings of the 31st International Conference on Automated Software Engineering (ASE 2016) 2016
DOI: 10.1145/2970276.2970315
Automatic clustering of code changes
13th International Conference on Mining Software Repositories (MSR 2016) (Austin, TX, USA, 14.05.2016 - 15.05.2016)
In: Proceedings of the 13th International Conference on Mining Software Repositories (MSR'16) 2016
DOI: 10.1145/2901739.2901749
URL: http://dl.acm.org/citation.cfm?id=2901749
2015
Simultaneous inspection: Hiding the overhead of inspector-executor style dynamic parallelization
International Workshop on Languages and Compilers for Parallel Computing (LCPC 2014) (Hillsboro, OR, USA, 15.09.2014 - 17.09.2014)
In: James Brodman, Peng Tu (ed.): Languages and Compilers for Parallel Computing, 27th International Workshop, LCPC 2014, Berlin Heidelberg: 2015
DOI: 10.1007/978-3-319-17473-0_7
Fast and efficient operational time series storage: The missing link in dynamic software analysis
Symposium on Software Performance (SSP 2015) (München, 04.11.2015 - 06.11.2015)
In: Softwaretechnik-Trends (Band 35, Nr. 3): Proceedings of the Symposium on Software Performance (SSP 2015) 2015
URL: http://pi.informatik.uni-siegen.de/gi/stt/35_3/03_Technische_Beitraege/SSP_2015_paper_10.pdf
Rahmenwerk zur Ausreißererkennung in Zeitreihen von Software-Laufzeitdaten
Fachtagung Software Engineering &amp; Management (SE 2015) (Dresden, Germany, 17.03.2015 - 20.03.2015)
In: Uwe Aßmann, Birgit Demuth, Thorsten Spitta, Georg Püschel, Ronny Kaiser (ed.): Software Engineering & Management (SE 2015), Bonn: 2015
URL: http://www2.informatik.uni-erlangen.de/publication/download/SE2015.pdf
Approximative Event Processing on Sensor Data Streams (Best Poster and Demonstration Award)
9th ACM International Conference on Distributed Event-Based Systems (DEBS'15) (Oslo, 29.06.2015 - 03.07.2015)
In: Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems (DEBS'15) 2015
DOI: 10.1145/2675743.2776767
URL: http://www2.informatik.uni-erlangen.de/publication/download/DEBS2015.pdf
Concurrent Computing in the Many-core Era (Dagstuhl Seminar 15021)
Dagstuhl, Germany: 2015
(Dagstuhl Reports, Vol. 5)
DOI: 10.4230/DagRep.5.1.1
URL: http://drops.dagstuhl.de/opus/volltexte/2015/5010
2014
Using Multi Level-Modeling Techniques for Managing Mapping Information
1st Int. Workshop on Multi-Level Modelling, ACM/IEEE 17th International Conference on Model Driven Engineering Languages and Systems (Valencia, Spain, 28.09.2014 - 28.09.2014)
In: Proceedings of the 1st Int. Workshop on Multi-Level Modelling, ACM/IEEE 17th International Conference on Model Driven Engineering Languages and Systems, Aachen: 2014
URL: http://ceur-ws.org/Vol-1286/p11.pdf
Combining Lock Inference with Lock-Based Software Transactional Memory
26th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2013) (San Jose, California, USA, 25.09.2013 - 27.09.2013)
In: Călin Cașcaval, Pablo Montesinos (ed.): Languages and Compilers for Parallel Computing, 26th International Workshop, LCPC 2013, Berlin Heidelberg: 2014
DOI: 10.1007/978-3-319-09967-5_19
Design for Diagnosability
In: Java Magazin (2014), p. 44-50
ISSN: 1619-795X
Predictive Load Management in Smart Grid Environments
8th ACM International Conference on Distributed Event-Based Systems (DEBS'14) (Mumbai, 26.05.2014 - 29.05.2014)
In: Proceedings of the 8th ACM International Conference on Distributed Event-Based Systems (DEBS'14), New York: 2014
DOI: 10.1145/2611286.2611330
URL: http://www2.informatik.uni-erlangen.de/publication/download/DEBS2014.pdf
Adaptive Speculative Processing of Out-of-Order Event Streams
In: ACM Transactions on Internet Technology 14 (2014), p. 4:1-4:24
ISSN: 1533-5399
DOI: 10.1145/2633686
URL: https://www2.cs.fau.de/publication/download/ACM_TOIT2014.pdf
A Modular and Statically Typed Effectful Stack for Custom Graph Traversals
8th International Workshop on Graph-Based Tools (GraBaTs 2014) (York, UK, 25.07.2014 - 25.07.2014)
In: Tichy, Matthias ; Westfechtel, Bernhard (ed.): Proceedings of the 8th International Workshop on Graph-Based Tools (GraBaTs 2014) 2014
DOI: 10.14279/tuj.eceasst.68.952
URL: http://journal.ub.tu-berlin.de/eceasst/article/view/952
2013
Source Code Transformations to Increase the Performance of Software Transactional Memory
24th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2011) (Fort Collins, Colorado, USA, 08.09.2011 - 10.09.2011)
In: Sanjay Rajopadhye, Michelle Mills Strout (ed.): Languages and Compilers for Parallel Computing, 24th International Workshop, LCPC 2011, Berlin Heidelberg: 2013
DOI: 10.1007/978-3-642-36036-7
Compiler-Guided Identification of Critical Sections in Parallel Code
International Conference on Compiler Construction (Rome, Italy, 16.03.2013 - 24.03.2013)
In: Ranjit Jhala, Koen De Bosschere (ed.): Compiler Construction, 22nd International Conference, CC 2013, Berlin Heidelberg: 2013
DOI: 10.1007/978-3-642-37051-9_11
URL: https://www2.informatik.uni-erlangen.de/publication/download/CC2013.pdf
Reduktion von False-Sharing in Software-Transactional-Memory
25th Workshop der GI-Fachgruppe Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware (PARS 2013) (Erlangen, Germany, 11.04.2013 - 12.04.2013)
In: Parallel-Algorithmen und Rechnerstrukturen (PARS 2013) 2013
URL: https://www2.cs.fau.de/publication/download/STM_PARS2013.pdf
Evolutionary Algorithms that use Runtime Migration of Detector Processes to Reduce Latency in Event-Based Systems
2013 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2013) (Torino, Italy, 25.06.2013 - 27.06.2013)
In: Proceedings of the 2013 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2013) 2013
DOI: 10.1109/AHS.2013.6604223
URL: http://www2.informatik.uni-erlangen.de/publication/download/AHS2013.pdf
Distributed Low-Latency Out-of-Order Event Processing for High Data Rate Sensor Streams
27th IEEE International Parallel & Distributed Processing Symposium (IPDPS'13) (Boston, Massachusetts, 20.05.2013 - 24.05.2013)
In: Proceedings of 27th International Parallel and Distributed Processing Symposium (IPDPS'13) 2013
DOI: 10.1109/IPDPS.2013.29
URL: http://www2.informatik.uni-erlangen.de/publication/download/IPDPS2013.pdf
Dynamic Low-Latency Distributed Event Processing of Sensor Data Streams
25th Workshop der GI-Fachgruppe Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware (PARS 2013) (Erlangen, 11.04.2013 - 12.04.2013)
In: Parallel-Algorithmen und Rechnerstrukturen (PARS 2013) 2013
URL: http://www2.informatik.uni-erlangen.de/publication/download/PARS2013.pdf
Reliable Speculative Processing of Out-of-Order Event Streams in Generic Publish/Subscribe Middlewares
7th ACM International Conference on Distributed Event-Based Systems (DEBS'13) (Arlington, Texas, 29.06.2013 - 03.07.2013)
In: Proceedings of the 7th ACM International Conference on Distributed Event-Based Systems (DEBS'13) 2013
DOI: 10.1145/2488222.2488263
URL: http://www2.informatik.uni-erlangen.de/publication/download/DEBS2013.pdf
Runtime Migration of Stateful Event Detectors with Low-Latency Ordering Constraints
9th International Workshop on Sensor Networks and Systems for Pervasive Computing (San Diego, CA, 18.03.2013 - 22.03.2013)
In: Proceedings of the 2013 IEEE International Conference on Pervasive Computing and Communications Workshops 2013
DOI: 10.1109/PerComW.2013.6529567
URL: http://www2.informatik.uni-erlangen.de/publication/download/persens2013.pdf
Double Inspection for Run-Time Loop Parallelization
24th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2011) (Fort Collins, CO, USA, 08.09.2011 - 10.09.2011)
In: Sanjay Rajopadhye, Michelle Mills Strout (ed.): Languages and Compilers for Parallel Computing, 24th International Workshop, LCPC 2011, Berlin Heidelberg: 2013
DOI: 10.1007/978-3-642-36036-7_4
Language and Runtime Techniques for better Model Checking Efficiency of Parallel Programs
26th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2013) (San Jose, California, USA, 25.09.2013 - 27.09.2013)
In: Călin Cașcaval, Pablo Montesinos (ed.): Proceedings of the 26th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2013), Berlin Heidelberg: 2013
DOI: 10.1007/978-3-319-09967-5
CellCilk: Extending Cilk for heterogeneous multicore platforms
24th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2011) (Fort Collins, Colorado, USA, 08.09.2011 - 10.09.2011)
In: Rajopadhye, S.; Strout, M. Mills (ed.): Languages and Compilers for Parallel Computing, 24th International Workshop, LCPC 2011, Berlin Heidelberg: 2013
DOI: 10.1007/978-3-642-36036-7_7
URL: http://www2.informatik.uni-erlangen.de/publication/download/CellCilk11.pdf
Object Support for OpenMP-style Programming of GPU Clusters in Java
27th International Conference on Advanced Information Networking and Applications Workshops (Barcelona, Spain, 25.03.2013 - 28.03.2013)
In: Proceedings of the 27th International Conference on Advanced Information Networking and Applications Workshops (WAINA 2013) 2013
DOI: 10.1109/WAINA.2013.62
URL: http://www2.informatik.uni-erlangen.de/publication/download/WDVP13.pdf
2012
Expressing Parallelism and Timing in Embedded Real-Time Applications
8th International Summer School on Advanced Computer Architecture and Compilation for High-Performance and Embedded Systems (Fiuggi, Italy, 11.07.2012 - 11.07.2012)
In: High-Performance and Embedded Architecture and Compilation (HiPEAC) Network of Excellence (ed.): 8th International Summer School on Advanced Computer Architecture and Compilation for High-Performance and Embedded Systems (ACACES) 2012 - Poster Abstracts, Ghent (Belgium): 2012
URL: https://www2.cs.fau.de/publication/download/braunstein_acaces_abstract.pdf
Annotation Support for Generic Patches
International Workshop on Recommendation Systems for Software Engineering (Zurich, Switzerland, 04.06.2012 - 04.06.2012)
In: Proceedings of the Third International Workshop on Recommendation Systems for Software Engineering (RSSE 12) 2012
DOI: 10.1109/RSSE.2012.6233400
URL: http://www2.informatik.uni-erlangen.de/publication/download/DVP12.pdf
An Integrated Tool Chain for Software Process Modeling and Execution
8th European Conference on Modeling Foundations and Applications (ECMFA 2012) (Lyngby, Denmark, 02.07.2012 - 05.07.2012)
In: Störrle, Harald ; Botterweck, Goetz ; Bourdellès, Michel ; Kolovos, Dimitris ; Paige, Richard ; Roubtsova, Ella ; Rubin, Julia ; Tolvanen, Juha-Pekka (ed.): Joint Proceedings of co-located Events at the 8th European Conference on Modeling Foundations and Applications (ECMFA 2012), Copenhagen, Denmark: 2012
URL: http://www2.imm.dtu.dk/conferences/ECMFA-2012/proceedings/PDF/ECMFA-2012-Workshop-Proceedings.pdf
Learning Event Detection Rules with Noise Hidden Markov Models
2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2012) (Nuremberg, Germany, 25.06.2012 - 28.06.2012)
In: Proceedings of the 2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2012) 2012
DOI: 10.1109/AHS.2012.6268645
URL: http://www2.informatik.uni-erlangen.de/publication/download/AHS2012.pdf
Towards a Distributed Self-Optimizing Event Processing System for Realtime Locating Systems (RTLS)
6th ACM International Conference on Distributed Event-Based Systems (DEBS'12) (Berlin, 16.06.2012 - 20.06.2012)
In: DEBS PhD Workshops, 6th ACM International Conference on Distributed Event-Based Systems 2012
URL: http://www2.informatik.uni-erlangen.de/publication/download/DEBS2012.pdf
Multicore Software Engineering, Performance and Tools (Proceedings MSEPT 2012)
Berlin Heidelberg: 2012
(Lecture Notes in Computer Science (LNCS), Vol. 7303)
ISBN: 978-3-642-31201-4
DOI: 10.1007/978-3-642-31202-1
TracQL: A Domain-Specific Language for Traceability Analysis
Joint Working Conference on Software Architecture & 6th European Conference on Software Architecture (WICSA/ECSA 2012) (Helsinki, Finland, 20.08.2012 - 24.08.2012)
In: Ali Babar M., Cuesta C., Savolainen J., Männistö T. (ed.): Proceedings of the 2012 Joint Working Conference on Software Architecture & 6th European Conference on Software Architecture, Los Alamitos, CA: 2012
DOI: 10.1109/WICSA-ECSA.212.53
Parallel Memory Defragmentation on a GPU
ACM SIGPLAN Workshop on Memory Systems Performance and Correctness (MSPC 12) (Beijing, China, 16.06.2012 - 16.06.2012)
In: Proceedings of the 2012 ACM SIGPLAN Workshop on Memory Systems Performance and Correctness (MSPC'12) 2012
DOI: 10.1145/2247684.2247693
URL: http://www2.informatik.uni-erlangen.de/publication/download/VP12.pdf
2011
ReflexML: UML-based architecture-to-code traceability and consistency checking
5th European Conference on Software Architecture, ECSA 2011 (Essen, 13.09.2011 - 16.09.2011)
In: Ivica Crnkovic, Volker Gruhn, Matthias Book (ed.): Software Architecture, 5th European Conference, ECSA 2011, Berlin Heidelberg: 2011
DOI: 10.1007/978-3-642-23798-0_37
URL: http://link.springer.com/chapter/10.1007/978-3-642-23798-0_37
Trajectory Behavior Language
2nd International Conference on Positioning and Context-Awareness (PoCA 2011) (Brussels, 24.03.2011 - 24.03.2011)
In: Proceedings of the 2nd International Conference on Positioning and Context-Awareness (PoCA 2011), Antwerpen: 2011
URL: http://www2.informatik.uni-erlangen.de/publication/download/PoCA2011b.pdf
A FUML-Based Distributed Execution Machine for Enacting Software Process Models
Modelling Foundations and Applications (Birmingham, UK, 06.06.2011 - 09.06.2011)
In: France, Robert ; Kuester, Jochen ; Bordbar, Behzad ; Paige, Richard (ed.): Proceedings 7th European Conference on Modeling Foundations and Applications, Berlin Heidelberg: 2011
DOI: 10.1007/978-3-642-21470-7_3
Is There Hope for Automatic Parallelization of Legacy Industry Automation Applications?
24th Workshop der GI-Fachgruppe Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware (PARS 2011) (Rüschlikon, Switzerland, 25.05.2011 - 27.05.2011)
In: Parallel-Algorithmen und Rechnerstrukturen (PARS 2011) 2011
URL: http://www2.informatik.uni-erlangen.de/publication/download/PARS2011.pdf
Structural Equivalence Partition and Boundary Testing
Software Engineering 2011 - Fachtagung des GI-Fachbereichs Softwaretechnik (Karlsruhe, 24.02.2011 - 25.02.2011)
In: Ralf Reussner, Matthias Grund, Andreas Oberweis, Walter Tichy (ed.): Lecture Notes in Informatics (LNI), P-183, Bonn: 2011
URL: https://www2.cs.fau.de/publication/download/SE2011-OsterPhilippsen-SEBT.pdf
JavaParty
In: Padua, David (ed.): Encyclopedia of Parallel Computing, New York: Springer US, 2011, p. 992-997
ISBN: 978-0-387-09765-7
DOI: 10.1007/978-0-387-09766-4_49
Proceedings of the 4th International Workshop on Multicore Software Engineering (IWMSE'11)
New York: 2011
(International Conference on Software Engineering ICSE, Vol. 2011, Waikiki, Honolulu, HI, USA)
ISBN: 978-1-4503-0577-8
URL: http://dl.acm.org/citation.cfm?id=1984693
A Statically Typed Query Language for Property Graphs
15th International Database Engineering and Applications Symposium (IDEAS'11) (Lissabon, Portugal, 21.09.2011 - 23.09.2011)
In: Bernardino, Jorge; Cruz, Isabel; Desai, Bipin C. (ed.): Proceedings of 15th International Database Engineering and Applications Symposium (IDEAS'11), New York: 2011
DOI: 10.1145/2076623.2076653
URL: http://www2.informatik.uni-erlangen.de/publication/download/Ntausch_ideas11.pdf
Enabling Multiple Accelerator Acceleration for Java/OpenMP
3rd USENIX Workshop on Hot Topics in Parallelism (HotPar '11) (Berkeley, CA, 26.05.2011 - 27.05.2011)
In: Proceedings 3rd USENIX Workshop on Hot Topics in Parallelism (HotPar '11) 2011
URL: http://www.usenix.org/event/hotpar11/tech/final_files/Veldema.pdf
A Hybrid Functional and Object-Oriented Language for a Multi-Core Future
In: it - Information Technology 53 (2011), p. 84-90
ISSN: 1611-2776
DOI: 10.1524/itit.2011.0629
URL: http://www2.informatik.uni-erlangen.de/publication/download/itit.2011.0629.pdf
Iterative data-parallel mark & sweep on a GPU
International Symposium on Memory Management (ISMM '11) (San Jose, California, USA, 04.06.2011 - 08.06.2011)
In: Boehm, Hans-Juergen ; Bacon, David F. (ed.): Proceedings of the International Symposium on Memory Management (ISMM'11), New York: 2011
DOI: 10.1145/1993478.1993480
URL: http://doi.acm.org/10.1145/1993478.1993480
2010
eSPEM - A SPEM Extension for Enactable Behavior Modeling
ECMFA 2010, 6th European Conference of Modelling Foundations and Applications (Paris, France, 15.06.2010 - 18.06.2010)
In: Kühne, Thomas ; Selic, Bran ; Gervais, Marie-Pierre ; Terrier, Francois (ed.): 6th European Conference of Modelling Foundations and Applications, Berlin Heidelberg: 2010
DOI: 10.1007/978-3-642-13595-8
URL: http://www2.informatik.uni-erlangen.de/publication/download/ecmfa2010_eSPEM.pdf
Irregular data-parallelism in a parallel object-oriented language by means of Collective Replication
CS-2010-04 (2010), p. 14
ISSN: 2191-5008
Open Access: https://opus4.kobv.de/opus4-fau/frontdoor/index/index/docId/1544
URL: http://www.opus.ub.uni-erlangen.de/opus/frontdoor.php?source_opus=2268
(Techreport)
Proceedings of the 3rd International Workshop on Multicore Software Engineering (IWMSE'10)
New York: 2010
(International Conference on Software Engineering ICSE, Vol. 2010, Cape Town, South Africa)
ISBN: 978-1-60558-964-0
URL: http://dl.acm.org/citation.cfm?id=1808954
Safe and Familiar Multi-core Programming by means of a Hybrid Functional and Imperative Language
22nd International Workshop of Languages and Compilers for Parallel Computing (LCPC 2009) (Newark, DE, 08.10.2009 - 10.10.2009)
In: Guang R. Gao, Lori L. Pollock, John Cavazos, Xiaoming Li (ed.): Languages and Compilers for Parallel Computing, 22nd International Workshop, LCPC 2009, Berlin Heidelberg: 2010
DOI: 10.1007/978-3-642-13374-9_11
URL: http://www2.informatik.uni-erlangen.de/publication/download/LCPC_hpc_tapir.pdf
2009
A meta-predictor framework for prefetching in object-based DSMs
In: Concurrency and Computation-Practice & Experience 21 (2009), p. 1789-1803
ISSN: 1532-0626
DOI: 10.1002/cpe.1443
URL: http://www2.informatik.uni-erlangen.de/publication/download/BKCP09.pdf
Reparallelization Techniques for Migrating OpenMP Codes in Computational Grids
In: Concurrency and Computation-Practice & Experience 21 (2009), p. 281-299
ISSN: 1532-0626
DOI: 10.1002/cpe.1356
URL: http://www2.informatik.uni-erlangen.de/publication/download/migration-OpenMP-CCPE.pdf
Tapir: Language Support to Reduce the State Space in Model-Checking
ATPS 2009 - 4. Arbeitstagung Programmiersprachen (Lübeck, 28.09.2009 - 02.10.2009)
In: Fischer, Stefan ; Maehle, Erik ; Reischuk, Rüdiger (ed.): Informatik 2009 - Im Focus das Leben, Bonn: 2009
URL: http://www2.informatik.uni-erlangen.de/publication/download/ATPS09-tapir.pdf
DAG Mining for Code Compaction
In: Cao, L. ; Yu, P. S. ; Zhang, C. ; Zhang, H. (ed.): Data Mining for Business Applications, Berlin Heidelberg: Springer, 2009, p. 209-224
ISBN: 978-0-387-79419-8
DOI: 10.1007/978-0-387-79420-4_15
URL: http://www2.informatik.uni-erlangen.de/publication/download/WDWFP09.pdf
Dynamic code footprint optimization for the IBM cell broadband engine
2009 ICSE Workshop on Multicore Software Engineering, IWMSE 2009 (Vancouver, BC, 18.05.2009 - 18.05.2009)
DOI: 10.1109/IWMSE.2009.5071385
URL: http://www2.informatik.uni-erlangen.de/publication/download/WFKSWP09.pdf
2008
Automatic Prefetching with Binary Code Rewriting in Object-based DSMs (Best Paper)
Euro-Par 2008 Conference (Las Palmas de Gran Canaria, Spain, 26.08.2008 - 29.08.2008)
In: Luque, Emilio ; Margalef, Tomàs ; Benítez, Domingo (ed.): EuroPar 2008 - Parallel Processing, Berlin Heidelberg: 2008
DOI: 10.1007/978-3-540-85451-7_69
URL: http://www2.informatik.uni-erlangen.de/publication/download/dynamic_prefetcher.pdf
A Proposal for OpenMP for Java
International Workshop on OpenMP (IWOMP'05) (Reims, France, 01.06.2005 - 04.06.2005)
In: Matthias S. Mueller, Barbara M. Chapman, Bronis R. de Supinski, Allen D. Malony, Michael Voss (ed.): OpenMP Shared Memory Parallel Programming, International Workshops IWOMP 2005 and IWOMP 2006, Berlin Heidelberg: 2008
DOI: 10.1007/978-3-540-68555-5_33
URL: http://www2.informatik.uni-erlangen.de/publication/download/java-openmp.pdf
An Automatic Cost-based Framework for Seamless Application Migration in Grid Environments
20th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS'08) (Orlando, FL, USA, 16.11.2008 - 18.11.2008)
In: Proceedings of the 20th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS'08), Anaheim, CA, USA: 2008
URL: http://www2.informatik.uni-erlangen.de/publication/download/OGRE-PDCS.pdf
Cluster Research at the Programming Systems Group
(2008), p. 30-31
URL: http://www.rrze.uni-erlangen.de/wir-ueber-uns/publikationen/HPC-2008-Screenversion.pdf
(other)
A DSM protocol aware of both thread migration and memory constraints (Best Paper)
20th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS'08) (Orlando, FL, USA, 16.11.2008 - 18.11.2008)
In: Gonzalez, Teofilo F. (ed.): Proceedings of the 20th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS'08), Anaheim, CA, USA: 2008
URL: http://www2.informatik.uni-erlangen.de/publication/download/LVM-PDCS.pdf
Evaluation of RDMA opportunities in an Object-Oriented DSM
20th International Workshop on Languages and Compilers for Parallel Computing (LCPC '07) (Urbana, Illinois, 11.10.2007 - 13.10.2007)
In: Vikram Adve, María Jesús Garzarán, Paul Petersen (ed.): Languages and Compilers for Parallel Computing, 20th International Workshop, LCPC 2007, Berlin Heidelberg: 2008
DOI: 10.1007/978-3-540-85261-2_15
URL: http://www2.informatik.uni-erlangen.de/publication/download/LCPC_rdma.pdf
Supporting Huge Address Spaces in a Virtual Machine for Java on a Cluster
20th International Workshop on Languages and Compilers for Parallel Computing (LCPC '07) (Urbana, Illinois, 11.10.2007 - 13.10.2007)
In: Vikram Adve, María Jesús Garzarán, Paul Petersen (ed.): Languages and Compilers for Parallel Computing, 20th International Workshop, LCPC 2007, Berlin Heidelberg: 2008
DOI: 10.1007/978-3-540-85261-2_13
URL: http://www2.informatik.uni-erlangen.de/publication/download/LCPC_LVM.pdf
DAGMA: Mining Directed Acyclic Graphs (Outstanding Paper Award)
IADIS European Conference on Data Mining 2008 (Amsterdam, The Netherlands, 24.07.2008 - 26.07.2008)
In: Hans Weghorn ; Ajith P. Abraham (ed.): Proceedings of the IADIS European Conference on Data Mining, Amsterdam, The Netherlands: 2008
URL: http://www2.informatik.uni-erlangen.de/publication/download/ecdm2008-dagma.pdf
2007
Graph-based procedural abstraction
International Symposium on Code Generation and Optimization, CGO 2007 (San Jose, CA, 11.03.2007 - 14.03.2007)
In: International Symposium on Code Generation and Optimization (CGO'07) 2007
DOI: 10.1109/CGO.2007.14
URL: http://www2.informatik.uni-erlangen.de/publication/download/cgo2007-shrink.pdf
Esodyp+: Prefetching in the Jackal Software DSM
Proceedings of the Euro-Par 2007 Conference (Rennes, France, 28.08.2007 - 31.08.2007)
In: Kermarrec, Anne-Marie; Bougé, Luc; Priol, Thierry (ed.): EuroPar 2007 - Parallel Processing, Berlin Heidelberg: 2007
DOI: 10.1007/978-3-540-74466-5_60
URL: http://www2.informatik.uni-erlangen.de/publication/download/esodyp.pdf
Reparallelization and Migration of OpenMP Programs
7th International Symposium on Cluster Computing and the Grid (CCGrid '07) (Rio de Janeiro, Brazil, 14.05.2007 - 17.05.2007)
In: Proceedings of the 7th International Symposium on Cluster Computing and the Grid (CCGrid '07), New York, NY, USA: 2007
DOI: 10.1109/CCGRID.2007.96
URL: http://www2.informatik.uni-erlangen.de/publication/download/migration-OpenMP.pdf
JaMP: An Implementation of OpenMP for a Java DSM
In: Concurrency and Computation-Practice & Experience 18 (2007), p. 2333-2352
ISSN: 1532-0626
DOI: 10.1002/cpe.1178
URL: http://www2.informatik.uni-erlangen.de/publication/download/jamp-journal.pdf
Reparallelisierung und Migration von OpenMP-Applikationen (Young Researchers Award)
21. Workshop der GI-Fachgruppe Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware (PARS 2007) (Hamburg, 31.05.2007 - 01.06.2007)
In: Parallel-Algorithmen und Rechnerstrukturen (PARS 2007) 2007
URL: http://www2.informatik.uni-erlangen.de/publication/download/migration-OpenMP-PARS.pdf
2006
JaMP: An Implementation of OpenMP for a Java DSM
Workshop on Compilers for Parallel Computers (A Coruna, Spain, 09.01.2006 - 11.01.2006)
In: Proceedings of the 12th Workshop on Compilers for Parallel Computers 2006
URL: http://www2.informatik.uni-erlangen.de/publication/download/jamp.pdf
Mining Molecular Datasets on Symmetric Multiprocessor Systems
2006 IEEE International Conference on Systems, Man and Cybernetics (Taipei, Taiwan, 08.10.2006 - 11.10.2006)
In: Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, New York: 2006
DOI: 10.1109/ICSMC.2006.384889
URL: http://www2.informatik.uni-erlangen.de/publication/download/SMC2006.pdf
The ParMol package for frequent subgraph mining
Third International Workshop on Graph Based Tools (GraBaTs) (Natal, Brasil, 21.09.2006 - 22.09.2006)
In: Zündorf, Albert ; Varro, Daniel (ed.): Third International Workshop on Graph Based Tools 2006
URL: http://www2.informatik.uni-erlangen.de/publication/download/GraBaTs2006_ParMol.pdf
The ParMol Package for Frequent Subgraph Mining
In: Electronic Communications of the EASST Volume 1 (2006), p. 1-12
ISSN: 1863-2122
URL: http://journal.ub.tu-berlin.de/eceasst/article/viewFile/85/63
Edgar: the Embedding-baseD GrAph MineR
International Workshop on Mining and Learning with Graphs (Berlin, 18.09.2006 - 22.09.2006)
In: Gärtner, Thomas ; Garriga, Gemma C. ; Meinl, Thorsten (ed.): Proceedings of the International Workshop on Mining and Learning with Graphs 2006
URL: http://www2.informatik.uni-erlangen.de/publication/download/MLG2006_Edgar.pdf
2005
Parallel Mining for Frequent Fragments on a Shared-Memory Multiprocessor -Results and Java-Obstacles-
Workshop der GI-Fachgruppe "Maschinelles Lernen, Wissensentdeckung, Data Mining" (FGML) (Saarbrücken, Germany, 10.10.2005 - 12.10.2005)
In: Bauer, Mathias ; Kröner, Alexander ; Brandherm, Boris (ed.): LWA 2005 - Beiträge zur GI-Workshopwoche Lernen, Wissensentdeckung, Adaptivität 2005
Near Overhead-free Heterogeneous Thread-migration
2005 IEEE International Conference on Cluster Computing (Boston, Massachusetts, USA, 26.09.2005 - 30.09.2005)
In: Proceedings of the 2005 IEEE International Conference on Cluster Computing, New York: 2005
DOI: 10.1109/CLUSTR.2005.347042
URL: http://www2.informatik.uni-erlangen.de/publication/download/thread_migration.pdf
A quantitative comparison of the subgraph miners MoFa, gSpan, FFSM, and Gaston
9th European Conference on Principles and Practices of Knowledge Discovery in Databases (Porto, Portugal, 03.10.2005 - 07.10.2005)
In: Jorge, Alipio; Torgo, Luis; Brazdil, Pavel; Camacho, Rui; Gama, Joao (ed.): Knowledge Discovery in Databases: PKDD 2005, Berlin Heidelberg: 2005
DOI: 10.1007/11564126_39
URL: http://www2.informatik.uni-erlangen.de/publication/download/PKDD05.pdf
2004
Latency Reduction in Software-DSMs by Means of Dynamic Function Splicing
16th IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS'04) (Cambridge, MA, USA, 09.11.2004 - 11.11.2004)
In: Proceedings of the 16th IASTED International Conference on Parallel and Distributed Computing and Systems, Anaheim, CA, USA: 2004
URL: http://www2.informatik.uni-erlangen.de/publication/download/pdcs04dfs.pdf
Using Object Combining for Object Prefetching in DSM Systems
11th Workshop on Compilers for Parallel Computers (CPC 2004) (Seeon, 07.07.2004 - 09.07.2004)
In: Gerndt, Michael ; Kereku, Edmond (ed.): Proceedings of the 11th Workshop on Compilers for Parallel Computers (CPC 2004) 2004
URL: http://www2.informatik.uni-erlangen.de/publication/download/cpc2004.ps.gz
2003
A Controlled experiment on inheritance depth as a cost factor for maintenance
In: Journal of Systems and Software 65 (2003), p. 115-126
ISSN: 0164-1212
DOI: 10.1016/S0164-1212(02)00053-5
URL: http://pswt.informatik.uni-erlangen.de/publication/download/jss.pdf
Compiler Optimized Remote Method Invocation
5th IEEE Conf. on Cluster Computing (CC 2003) (Hong Kong, 01.12.2003 - 04.12.2003)
In: Proc. 5th IEEE Conf. on Cluster Computing 2003
DOI: 10.1109/CLUSTR.2003.1253308
URL: http://www2.informatik.uni-erlangen.de/publication/download/cc2003.pdf
2002
Internetwahlen: Demokratische Wahlen über das Internet?
In: Informatik-Spektrum 25 (2002), p. 138-150
ISSN: 0170-6012
DOI: 10.1007/s002870200216
URL: http://www2.informatik.uni-erlangen.de/publication/download/wahlen.pdf
Finding plagiarisms among a set of programs with JPlag
In: Journal of Universal Computer Science 8 (2002), p. 1016-1038
ISSN: 0948-695X
DOI: 10.3217/jucs-008-11-1016
URL: http://pswt.informatik.uni-erlangen.de/publication/download/jplag.pdf
Two controlled experiments assessing the usefulness of design pattern documentation during program maintenance
In: IEEE Transactions on Software Engineering 28 (2002), p. 595-606
ISSN: 0098-5589
DOI: 10.1109/TSE.2002.1010061
URL: http://pswt.informatik.uni-erlangen.de/publication/download/patdoc_tse2001.pdf
2001
Java and numerical computing
In: Computing in Science & Engineering 3 (2001), p. 18-24
ISSN: 1521-9615
DOI: 10.1109/5992.908997
URL: http://www2.informatik.uni-erlangen.de/publication/download/cise-ron.pdf
Java communications for large-scale parallel computing
3rd International Conference on Large-Scale Scientific Computations (Sozopol/Bulgaria, 06.06.2001 - 10.06.2001)
In: Margenov, S. ; Wasniewski, J. ; Yalamov, P. (ed.): Large-Scale Scientific Computing, Berlin Heidelberg: 2001
DOI: 10.1007/3-540-45346-6_3
URL: http://www2.informatik.uni-erlangen.de/publication/download/scicomp-getov.pdf
Multiparadigm communications in Java for Grid computing
In: Communications of the ACM 44 (2001), p. 118-125
ISSN: 0001-0782
DOI: 10.1145/383845.383872
URL: http://www2.informatik.uni-erlangen.de/publication/download/cacm-getov.pdf
Exploiting object locality in JavaParty, a distributed computing environment for workstation clusters
9th Intl. Workshop on Compilers for Parallel Computers (CPC 2001) (Edinburgh, Scotland/UK, 27.06.2001 - 29.06.2001)
In: O'Boyle, Michael ; Fursin, Grigori ; Ashby, Tom ; Franke, Bjoern ; Long, Shun (ed.): Proceedings of the 9th Intl. Workshop on Compilers for Parallel Computers (CPC 2001) 2001
URL: http://www2.informatik.uni-erlangen.de/publication/download/jp-local-calls.pdf
Leistungsaspekte Paralleler Objektorientierter Programmiersprachen (Habilitation, 2001)
Verschiedene Realisierungsmöglichkeiten für komplexe Zahlen in Java im Vergleich
In: it + ti - Informationstechnik und Technische Informatik 43 (2001), p. 159-165
ISSN: 0944-2774
DOI: 10.1524/itit.2001.43.3.159
URL: http://www2.informatik.uni-erlangen.de/publication/download/complex-itti.pdf
JavaGrande - High Performance Computing with Java
Workshop on Applied Parallel Computing, New Paradigms for HPC in Industry and Academia (Para2000) (Bergen, Norway, 18.06.2000 - 21.06.2000)
In: Sørevik, Tor ; Manne, Fredrik ; Moe, Randi ; Gebremedhin, Assefaw Hadish (ed.): Applied Parallel Computing. New Paradigms for HPC in Industry and Academia, Berlin Heidelberg: 2001
DOI: 10.1007/3-540-70734-4_5
2000
Complex numbers for Java
In: Concurrency Practice and Experience 12 (2000), p. 477-491
ISSN: 1040-3108
DOI: 10.1002/1096-9128(200005)12:63.0.CO;2-W
URL: http://www2.informatik.uni-erlangen.de/publication/download/complexe.pdf
A survey on concurrent object-oriented languages
In: Concurrency and Computation-Practice & Experience 12 (2000), p. 917-980
ISSN: 1532-0626
DOI: 10.1002/1096-9128(20000825)12:103.0.CO;2-F
URL: http://www2.informatik.uni-erlangen.de/publication/download/cool-survey.pdf
Cooperating distributed garbage collectors for clusters and beyond
In: Concurrency Practice and Experience 12 (2000), p. 595-610
ISSN: 1040-3108
DOI: 10.1002/1096-9128(200005)12:73.0.CO;2-D
URL: http://www2.informatik.uni-erlangen.de/publication/download/dgc.pdf
Cooperating distributed garbage collectors for clusters and beyond
8th Workshop on Compilers for Parallel Computers (CPC 2000) (Aussois, France, 04.01.2000 - 07.01.2000)
In: Proceedings of the 8th Workshop on Compilers for Parallel Computers (CPC 2000) 2000
URL: http://www2.informatik.uni-erlangen.de/publication/download/dgc.pdf
JavaGrande: Hochleistungsrechnen mit Java
OOP'2000 (München)
In: Proc. Objekt-Orientiertes Programmieren, -: 2000
URL: http://www2.informatik.uni-erlangen.de/publication/download/oop00.pdf
JavaGrande - Hochleistungsrechnen mit Java
In: Informatik-Spektrum 23 (2000), p. 79-89
ISSN: 0170-6012
DOI: 10.1007/s002870050153
URL: http://www2.informatik.uni-erlangen.de/publication/download/jgf.pdf
Locality optimization in JavaParty by means of static type analysis
In: Concurrency Practice and Experience 12 (2000), p. 613-628
ISSN: 1040-3108
DOI: 10.1002/1096-9128(200007)12:83.0.CO;2-G
URL: http://www2.informatik.uni-erlangen.de/publication/download/static.pdf
More efficient serialization and RMI for Java
In: Concurrency Practice and Experience 12 (2000), p. 495-518
ISSN: 1040-3108
DOI: 10.1002/1096-9128(200005)12:73.0.CO;2-W
URL: http://www2.informatik.uni-erlangen.de/publication/download/serialrmi.pdf
JPlag: Finding plagiarisms among a set of programs
(2000)
(Techreport)
1999
Irregular parallel algorithms in Java
6th Int. Workshop on Solving Irregularly Structured Problems in Parallel (San Juan, Puerto Rico, USA, 12.04.1999 - 16.04.1999)
In: Rolim, José (ed.): Parallel and Distributed Processing, Berlin Heidelberg: 1999
DOI: 10.1007/BFb0097988
URL: http://www2.informatik.uni-erlangen.de/publication/download/irregular99.pdf
Fair multi-branch locking of several locks
In: International Journal of Parallel and Distributed Systems and Networks 2 (1999), p. 17-26
ISSN: 1206-2138
URL: http://www2.informatik.uni-erlangen.de/publication/download/locks.pdf
Java as a basis for parallel data mining in workstation clusters
7th International Conference on High-Performance Computing and Networking (HPCN) (Amsterdam, NL, 12.04.1999 - 14.04.1999)
In: Sloot, P. ; Bubak, M. ; Hoekstra, Alfons G. ; Hertzberger, B. (ed.): Proc. 7th International Conference on High Performance Computing and Networking (HPCN Europe), Berlin Heidelberg: 1999
DOI: 10.1007/BFb0100648
URL: http://www2.informatik.uni-erlangen.de/publication/download/dataminps.pdf
Complex numbers for Java
3rd International Symposium on Computing in Object-Oriented Parallel Environments (ISCOPE'99) (San Francisco/USA, 07.12.1999 - 10.12.1999)
In: Matsuoka, Satoshi ; Oldehoeft, Rodney R. ; Tholburn, Marydell (ed.): Computing in Object-Oriented Parallel Environments, Berlin Heidelberg: 1999
DOI: 10.1007/10704054_1
URL: https://www2.cs.fau.de/publication/download/complexe.pdf
Komplexe Zahlen für Java
Java-Informationstage JIT'99 (Düsseldorf, 20.09.1999 - 21.09.1999)
In: Cap, C. H. (ed.): Java-Informations-Tage JIT'99, Berlin Heidelberg: 1999
DOI: 10.1007/978-3-642-60247-4_24
URL: http://www2.informatik.uni-erlangen.de/publication/download/complexd.pdf
More efficient object serialization
6th Int. Workshop on Solving Irregularly Structured Problems in Parallel (San Juan, Puerto Rico/USA, 12.04.1999 - 16.04.1999)
In: Rolim, José (ed.): Parallel and Distributed Processing, Berlin Heidelberg: 1999
DOI: 10.1007/BFb0097962
URL: https://www2.cs.fau.de/publication/download/serialrmi.pdf
A more efficient RMI
ACM 1999 conference on Java Grande (San Francisco, CA, 12.06.1999 - 14.06.1999)
In: Fox, Geoffrey; Schauser, Klaus; Snir, Marc (ed.): Proceedings of the ACM 1999 conference on Java Grande, New York: 1999
DOI: 10.1145/304065.304117
URL: https://www2.cs.fau.de/publication/download/serialrmi.pdf
JavaParty: Erfahrungen mit verteiltem und parallelem Programmieren in Java
OOP'99 (München)
In: Paulisch, Frances (ed.): Objekt-Orientiertes Programmieren, -: 1999
URL: http://www2.informatik.uni-erlangen.de/publication/download/oop99.pdf
Effizientes RMI für Java
Java-Informations-Tage JIT'99 (Düsseldorf, 20.09.1999 - 21.09.1999)
In: Cap, C. H. (ed.): Java-Informations-Tage JIT'99, Berlin Heidelberg: 1999
DOI: 10.1007/978-3-642-60247-4_13
URL: http://www2.informatik.uni-erlangen.de/publication/download/rmi-german.pdf
Java Entwicklungsumgebungen im Vergleich
(1999)
URL: http://www2.informatik.uni-erlangen.de/publication/download/idevergl.pdf
(Techreport)
Interim Java Grande Forum Report
JGF-TR-4 (1999)
URL: http://www2.informatik.uni-erlangen.de/publication/download/jgf-tr-4-beta.pdf
(Techreport)
1998
RESH - Rechnernetze als Supercomputer und Hochleistungsdatenbanken: Zwischenbericht
(1998)
URL: http://www2.informatik.uni-erlangen.de/publication/download/1998-23.pdf
(Techreport)
CeBIT'98 brachte wertvolle Kontakte zur Industrie
In: UNIKATH-Neues vom Campus der Universität Karlsruhe, 1998, p. 24-25
(Techreport)
Seminarbeiträge Cache-Optimierung
(1998)
URL: http://www2.informatik.uni-erlangen.de/publication/download/1998-05.pdf
(Techreport)
Large-scale parallel geophysical algorithms in Java: A feasibility study
In: Concurrency and Computation-Practice & Experience 10 (1998), p. 1143-1154
ISSN: 1532-0626
DOI: 10.1002/(SICI)1096-9128(199809/11)10:11/133.0.CO;2-W
URL: http://www2.informatik.uni-erlangen.de/publication/download/veltran.pdf
Large-scale parallel geophysical algorithms in Java: A feasibility study
In: Leading Edge (Tulsa, OK) 17 (1998), p. 1662-1666
ISSN: 1070-485X
URL: https://www2.cs.fau.de/publication/download/tle.pdf
Large-scale parallel geophysical algorithms in Java: A feasibility study
ACM Workshop on Java for High-Performance Network Computing (Palo Alto, CA, 28.02.1998 - 01.03.1998)
In: Fox, Geoffrey C. (ed.): Proc. of the ACM Workshop on Java for High-Performance Network Computing, New York: 1998
URL: http://www2.informatik.uni-erlangen.de/publication/download/veltran.pdf
Large-scale parallel geophysical algorithms in Java: A feasibility study
(1998), p. 32
(Techreport)
Parallelizing large-scale geophysical algorithms in Java
Fourth International Conference on Mathematical and Numerical Aspects of Wave Propagation (Waves'98) (Golden, Colorado/USA, 01.06.1998 - 05.06.1998)
In: DeSanto, John A. ; Cohen, Gary (ed.): Fourth International Conference on Mathematical and Numerical Aspects of Wave Propagation (Waves'98), Philadelphia PA: 1998
URL: http://www2.informatik.uni-erlangen.de/publication/download/siamWP98.pdf
Data parallelism in Java
Proc. 12th Int. Symposium on High Performance Computing Systems and Applications (HPCS'98) (Edmonton, Canada, 20.05.1998 - 22.05.1998)
In: Schaefer, J. (ed.): High Performance Computing Systems and Applications, New York: 1998
DOI: 10.1007/978-1-4615-5611-4_11
URL: https://www2.cs.fau.de/publication/download/forall.pdf
Is Java ready for computational science?
2nd European Parallel and Distributed Systems Conference (Euro-PDS'98) (Vienna/Austria, 01.07.1998 - 03.07.1998)
In: Bukhres, O. ; El-Rewini, H. (ed.): 2nd European Parallel and Distributed Systems Conference 1998
URL: http://www2.informatik.uni-erlangen.de/publication/download/javaCS.pdf
Locality optimization in JavaParty by means of static type analysis
7th International Workshop on Compilers for Parallel Computers (CPC 1998) (Linköping, 29.06.1998 - 01.07.1998)
In: Fritzson, Peter (ed.): Proceedings of the 7th International Workshop on Compilers for Parallel Computers (CPC 1998) 1998
Locality optimization in JavaParty by means of static type analysis
First UK Workshop on Java for High Performance Network Computing (Southampton/UK, 02.09.1998 - 03.09.1998)
In: Pritchard, David ; Reeve, Jeff (ed.): First UK Workshop on Java for High Performance Network Computing 1998
URL: https://www2.cs.fau.de/publication/download/static.pdf
Fallstudie: Parallele Realisierung geophysikalischer Basisalgorithmen in Java
In: Informatik - Forschung und Entwicklung 13 (1998), p. 72-78
ISSN: 0178-3564
DOI: 10.1007/s004500050099
URL: http://www2.informatik.uni-erlangen.de/publication/download/fallstudie.pdf
BibTeX: Download
, , :
JavaParty - portables paralleles und verteiltes Programmieren in Java
Java-Informations-Tage (JIT'98) (Frankfurt/Main, 12.11.1998 - 13.11.1998)
In: Cap, C. H. (ed.): Java-Informations-Tage, Berlin Heidelberg: 1998
DOI: 10.1007/978-3-642-59984-2_3
URL: http://www2.informatik.uni-erlangen.de/publication/download/partyd.pdf
BibTeX: Download
, , :
Java Grande Forum Report: Making Java work for high-end computing
JGF-TR-1 (1998)
URL: http://www2.informatik.uni-erlangen.de/publication/download/sc98grande.pdf
BibTeX: Download
(Techreport)
, , , , , , , , , , :
Forschungsprojekte des Lehrstuhls für Programmiersysteme der Universität Karlsruhe (TH)
In: Informatik - Forschung und Entwicklung 13 (1998), p. 93-96
ISSN: 0178-3564
DOI: 10.1007/s004500050101
URL: http://www2.informatik.uni-erlangen.de/publication/download/jahr1997.pdf
BibTeX: Download
1997
, , , :
Fair multi-branch locking of several locks
IASTED Intl. Conf. on Parallel and Distributed Computing and Systems (PDCS) (Washington D.C./USA, 13.10.1997 - 16.10.1997)
In: Li, K. ; Olariu, S. ; Pan, Y. ; Stojmenovic, I. (ed.): Proceedings of the 1997 IASTED Intl. Conf. on Parallel and Distributed Computing and Systems (PDCS) 1997
URL: https://www2.cs.fau.de/publication/download/locks.pdf
BibTeX: Download
Philippsen Michael, Zenger M.:
JavaParty: Transparent remote objects in Java
Symposium on Principles and Practice of Parallel Programming, Workshop on Java for Science and Engineering Computation (Las Vegas, NV, 21.06.1997 - 21.06.1997)
In: Symposium on Principles and Practice of Parallel Programming, Workshop on Java for Science and Engineering Computation 1997
URL: http://www2.informatik.uni-erlangen.de/publication/download/party.pdf
BibTeX: Download
JavaParty: Transparent remote objects in Java
In: Concurrency Practice and Experience 9 (1997), p. 1225-1242
ISSN: 1040-3108
DOI: 10.1002/(SICI)1096-9128(199711)9:113.0.CO;2-F
URL: http://www2.informatik.uni-erlangen.de/publication/download/party.pdf
BibTeX: Download
, :
Documenting design patterns in code eases program maintenance
Workshop on Process Modelling and Empirical Studies of Software Evolution at the International Conference on Software Engineering (ICSE) (Boston/USA, 18.05.1997 - 18.05.1997)
In: Harrison, Rachel ; Shepperd, Martin ; Daly, John W. (ed.): ICSE Workshop - Process Modeling and Empirical Studies of Software Evolution 1997
URL: http://pswt.informatik.uni-erlangen.de/publication/download/jakk_pmesse97.pdf
BibTeX: Download
1996
, , :
Java Seminarbeiträge
(1996)
URL: http://www2.informatik.uni-erlangen.de/publication/download/1996-24.pdf
BibTeX: Download
(Techreport)
1995
:
Automatic alignment of array data and processes to reduce communication time on DMPPs
Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP'95) (Santa Barbara/CA/USA, 19.07.1995 - 21.07.1995)
In: Wexelblat, Richard L. (ed.): Proceedings of the 5th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP'95), New York: 1995
DOI: 10.1145/209936.209953
URL: http://www2.informatik.uni-erlangen.de/publication/download/Alignment-2.pdf
BibTeX: Download
:
Enabling compiler transformations for pSather
(1995)
URL: https://www2.informatik.uni-erlangen.de/publication/download/EnablingTrafos.pdf
BibTeX: Download
(Techreport)
:
Imperative concurrent object-oriented languages: An annotated bibliography
(1995)
URL: http://www2.informatik.uni-erlangen.de/publication/download/COOL-bib.pdf
BibTeX: Download
(Techreport)
:
Imperative concurrent object-oriented languages
(1995)
URL: https://www2.informatik.uni-erlangen.de/publication/download/cool-survey.pdf
BibTeX: Download
(Techreport)
:
Automatic synchronization elimination in synchronous FORALLs
The Fifth Symposium on the Frontiers of Massively Parallel Computation (Frontiers'95) (Mc Lean, VA/USA, 06.02.1995 - 09.02.1995)
In: Frontiers '95 : The Fifth Symposium on the Frontiers of Massively Parallel Computation, Los Alamitos, CA: 1995
DOI: 10.1109/FMPC.1995.380435
URL: https://www2.cs.fau.de/publication/download/front95.pdf
BibTeX: Download
1994
, :
Automatic alignment of array data and processes to reduce communication time on DMPPs
(1994)
URL: http://www2.informatik.uni-erlangen.de/publication/download/Alignment-2.pdf
BibTeX: Download
(Techreport)
:
Optimierungstechniken zur Übersetzung paralleler Programmiersprachen (Dissertation, 1994)
BibTeX: Download
:
Sather 1.0 tutorial
(1994)
URL: http://www2.informatik.uni-erlangen.de/publication/download/SatherTutorial.pdf
BibTeX: Download
(Techreport)
:
Data and process alignment in Modula-2*
Intl. Workshop on Automatic Parallelization (AP'93) (Saarbrücken, Germany, 01.03.1993 - 03.03.1993)
In: Kessler, C.W. (ed.): Automatic Parallelization - New Approaches to Code Generation, Data Distribution, and Performance Prediction, Wiesbaden: 1994
DOI: 10.1007/978-3-322-87865-6_10
URL: http://www2.informatik.uni-erlangen.de/publication/download/Alignment-1.pdf
BibTeX: Download
, :
Project Triton: Towards improved programmability of parallel computers
In: David J. Lilja ; Peter L. Bird (ed.): The Interaction of Compilation Technology and Computer Architecture, Boston, Dordrecht, London: Kluwer Academic Publishers, 1994, p. 249-281
ISBN: 978-1-4613-6154-1
DOI: 10.1007/978-1-4615-2684-1_10
URL: http://www2.informatik.uni-erlangen.de/publication/download/Triton.pdf
BibTeX: Download
, , , , , :
Zur programmiertechnischen Beherrschung von massivem Parallelismus
Softwareentwicklung für Supercomputer (Karlsruhe, Germany, 03.03.1994 - 04.03.1994)
In: Schreiner, A. ; Schnepf, E. (ed.): Drittes ODIN Symposium, Rechenzentrum Universität Karlsruhe 1994
BibTeX: Download
1993
, , , , , , :
Synchronization barrier elimination in synchronous FORALLs
(1993)
URL: https://www2.cs.fau.de/publication/download/front95.pdf
BibTeX: Download
(Techreport)
, :
Triton/1: A massively-parallel mixed-mode computer designed to support high level languages
2nd Workshop on Heterogeneous Processing (WHP 93) (Newport Beach/CA/USA, 13.04.1993 - 16.04.1993)
In: Proceedings of the 2nd Workshop on Heterogeneous Processing (WHP 93), New York: 1993
DOI: 10.1109/WHP.1993.664368
URL: http://www2.informatik.uni-erlangen.de/publication/download/triton-hp.pdf
BibTeX: Download
, , , :
The Modula-2* environment for parallel programming
Conference on Massively Parallel Programming Models (MPPM'93) (Berlin, Germany, 20.09.1993 - 23.09.1993)
In: Giloi, Wolfgang K. (ed.): Programming Models for Massively Parallel Computers, Los Alamitos: 1993
DOI: 10.1109/PMMP.1993.315555
URL: http://www2.informatik.uni-erlangen.de/publication/download/mppm93.pdf
BibTeX: Download
, , , , :
The Modula-2* environment for parallel programming
3rd International Workshop on Compilers for Parallel Computers (CPC 1993) (Delft, NL, 13.12.1993 - 16.12.1993)
In: Henk Sips (ed.): Proceedings of the 3rd International Workshop on Compilers for Parallel Computers (CPC 1993) 1993
URL: http://www2.informatik.uni-erlangen.de/publication/download/mppm93.pdf
BibTeX: Download
, , , , :
Compiling machine-independent parallel programs
In: ACM SIGPLAN Notices 28 (1993), p. 99-108
ISSN: 0362-1340
DOI: 10.1145/163114.163127
URL: http://www2.informatik.uni-erlangen.de/publication/download/sigplan93.pdf
BibTeX: Download
, , :
Compiling machine-independent parallel programs
(1993)
URL: https://www2.informatik.uni-erlangen.de/publication/download/sigplan93.pdf
BibTeX: Download
(Techreport)
, , :
Project Triton: Towards improved programmability of parallel machines
26th Hawaii International Conference on System Sciences (HICSS) (Wailea, Maui, Hawaii/USA, 04.01.1993 - 08.01.1993)
In: Proceedings of the 26th Hawaii International Conference on System Sciences (HICSS), New York: 1993
DOI: 10.1109/HICSS.1993.270745
URL: https://www2.cs.fau.de/publication/download/Triton.pdf
BibTeX: Download
, , , :
Programming parallel supercomputers
Joint international conference on mathematical methods and supercomputing in nuclear applications (M and C and SNA '93) (Karlsruhe, 19.04.1993 - 23.04.1993)
In: Küster, H. ; Stein, E. ; Werner, W. (ed.): Proceedings of the International Conference on Mathematical Methods and Supercomputing in Nuclear Applications 1993
URL: http://www2.informatik.uni-erlangen.de/publication/download/nuclear.pdf
BibTeX: Download
1992
, :
Automatic data distribution for nearest neighbor networks
Fourth Symposium on the Frontiers of Massively Parallel Computation (Frontiers '92) (Mc Lean, VA, USA, 19.10.1992 - 21.10.1992)
In: Proceedings of the Fourth Symposium on the Frontiers of Massively Parallel Computation (Frontiers '92) 1992
DOI: 10.1109/FMPC.1992.234890
URL: http://www2.informatik.uni-erlangen.de/publication/download/front92.pdf
BibTeX: Download
:
Compiling for massively parallel machines
International Workshop on Code Generation (Dagstuhl Castle, Germany, 20.05.1991 - 24.05.1991)
In: Giegerich, Robert ; Graham, Susan L. (ed.): Code Generation - Concepts, Tools, Techniques, Berlin Heidelberg: 1992
DOI: 10.1007/978-1-4471-3501-2_6
URL: http://www2.informatik.uni-erlangen.de/publication/download/dagstuhl.pdf
BibTeX: Download
, :
Modula-2* and its compilation
First International ACPC (Austrian Center for Parallel Computation) Conference (Salzburg, Austria, 30.09.1991 - 02.10.1991)
In: Zima, Hans P. (ed.): Parallel Computation: First International ACPC Conference, Berlin Heidelberg: 1992
DOI: 10.1007/3-540-55437-8_79
URL: http://www2.informatik.uni-erlangen.de/publication/download/salzburg.pdf
BibTeX: Download
, :
Project Triton: Towards improved programmability of parallel machines
(1992)
URL: https://www2.cs.fau.de/publication/download/Triton.pdf
BibTeX: Download
(Techreport)
, , , :
Projekt Triton: Beiträge zur Verbesserung der Programmierbarkeit hochparalleler Rechensysteme
In: Informatik - Forschung und Entwicklung 7 (1992), p. 1-13
ISSN: 0178-3564
URL: http://www2.informatik.uni-erlangen.de/publication/download/d-triton.pdf
BibTeX: Download
, , , :
A critique of the programming language C*
In: Communications of the ACM 35 (1992), p. 21-24
ISSN: 0001-0782
DOI: 10.1145/129888.376122
URL: http://www2.informatik.uni-erlangen.de/publication/download/CstarCritique.pdf
BibTeX: Download
, , :
From Modula-2* to efficient parallel code
(1992)
URL: https://www2.cs.fau.de/publication/download/wien.pdf
BibTeX: Download
(Techreport)
, , , :
From Modula-2* to efficient parallel code
3rd International Workshop on Compilers for Parallel Computers (CPC 1992) (Vienna, Austria, 06.07.1992 - 09.07.1992)
In: Zima, Hans P. (ed.): Proceedings of the 3rd International Workshop on Compilers for Parallel Computers (CPC 1992) 1992
URL: http://www2.informatik.uni-erlangen.de/publication/download/wien.pdf
BibTeX: Download
1991
, , , :
Erfahrungen mit der MasPar MP-1
Conference on Supercomputing and Applications (Bochum)
In: Ehlich, Hartmut ; Schloßer, Karl-Heinz ; Wojcieszynski, Brigitte (ed.): Bochumer Schriften zur Parallelen Datenverarbeitung, Bochum: 1991
URL: http://www.philippsen.com/mypapers/bib/I001.bib
BibTeX: Download
:
Modula-2* and its compilation
(1991)
URL: http://www2.informatik.uni-erlangen.de/publication/download/salzburg.pdf
BibTeX: Download
(Techreport)
, , :
Hochgradiger Parallelismus
(1991)
URL: http://www2.informatik.uni-erlangen.de/publication/download/d-hochparallel.pdf
BibTeX: Download
(Techreport)
, :
Hochgradiger Parallelismus
(1991)
URL: http://www2.informatik.uni-erlangen.de/publication/download/d-hochparallel.pdf
BibTeX: Download
(Techreport)
, :
A critique of the programming language C*
(1991)
URL: http://www2.informatik.uni-erlangen.de/publication/download/CstarCritique.pdf
BibTeX: Download
(Techreport)
1990
, , :
The Triton project
(1990)
BibTeX: Download
(Techreport)
Ausgewählte Kapitel aus dem Übersetzerbau
Basic data
Title | Ausgewählte Kapitel aus dem Übersetzerbau |
---|---|
Short text | inf2-ueb3 |
Module frequency | winter semester only |
Semester hours per week | 2 |
No registration is required.
Parallel groups / dates
The lecture examines aspects of compiler construction that go beyond the lectures "Grundlagen des Übersetzerbaus" and "Optimierungen in Übersetzern".
The expected topics are:
- Compilers and optimizations for functional programming languages
- Compilation of aspect-oriented programming languages
- Detection of race conditions
- Software watermarking
- Static analysis and symbolic execution
- Linking of object code and support for dynamic libraries
- Strategies for exception handling
- Just-in-time compilers
- Memory management and garbage collection
- LLVM
The course materials are provided via StudOn.
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, Tobias Heineken, Florian Mayer, Daniela Novac |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 08:15 - 09:45 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | |
Grundlagen des Übersetzerbaus
Basic data
Title | Grundlagen des Übersetzerbaus |
---|---|
Short text | inf2-ueb |
Module frequency | winter semester only |
Semester hours per week | 2 |
Successful completion of the exercises is a prerequisite for taking the module examination.
Parallel groups / dates
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Florian Mayer, Prof. Dr. Michael Philippsen, Tobias Heineken |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Thu, 08:15 - 09:45 | 17.10.2024 - 06.02.2025 | 26.12.2024, 02.01.2025 | | | |
Parallele und Funktionale Programmierung
Basic data
Title | Parallele und Funktionale Programmierung |
---|---|
Short text | PFP |
Module frequency | winter semester only |
Semester hours per week | 2 |
Parallel groups / dates
The course materials are provided via StudOn.
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Dr.-Ing. Norbert Oster |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Tue, 12:15 - 13:45 | 15.10.2024 - 04.02.2025 | 24.12.2024, 31.12.2024 | | | |
Machine Learning: Advances
Basic data
Title | Machine Learning: Advances |
---|---|
Short text | SemML-II |
Module frequency | winter semester only |
Semester hours per week | 2 |
Registration with a topic request by email before the seminar starts; topics are assigned on a first-come, first-served basis.
Parallel groups / dates
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German or English |
Responsible | Prof. Dr. Michael Philippsen, Tobias Feigl |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
by arrangement | - | | | | |
single session Thu, 14:00 - 15:00 | 10.10.2024 - 10.10.2024 | | | | |
block course (incl. Sat), Sat 09:00 - 16:00 | 04.01.2025 - 29.03.2025 | 06.01.2025 | | | |
Machine Learning: Introduction
Basic data
Title | Machine Learning: Introduction |
---|---|
Short text | SemML-I |
Module frequency | winter semester only |
Semester hours per week | 2 |
Registration with a topic request by email before the seminar starts; topics are assigned on a first-come, first-served basis.
Parallel groups / dates
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German or English |
Responsible | Prof. Dr. Michael Philippsen, Tobias Feigl |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
by arrangement | - | | | | |
single session Thu, 14:00 - 15:00 | 10.10.2024 - 10.10.2024 | | | | 11302.04.150 |
block course (incl. Sat), Sat 09:00 - 16:00 | 04.01.2025 - 29.03.2025 | 06.01.2025 | | | |
Begleitseminar zu Bachelor- und Masterarbeiten
Basic data
Title | Begleitseminar zu Bachelor- und Masterarbeiten |
---|---|
Short text | inf2-bs-bama |
Module frequency | every semester |
Semester hours per week | 3 |
Parallel groups / dates
Parallel group 1
Semester hours per week | 3 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Mon, 12:15 - 13:45 | 14.10.2024 - 03.02.2025 | 23.12.2024, 30.12.2024, 06.01.2025 | | | 11302.04.150 |
Übungen zu Ausgewählte Kapitel aus dem Übersetzerbau
Basic data
Title | Übungen zu Ausgewählte Kapitel aus dem Übersetzerbau |
---|---|
Short text | inf2-ueb3-ex |
Module frequency | winter semester only |
Semester hours per week | 2 |
Block course by arrangement after the lecture period.
Parallel groups / dates
The exercises for Übersetzerbau 3 complement the lecture. Among other things, the lecture examines the architecture and inner workings of a virtual machine, and the exercises put this into practice: in a block course, the students implement a small virtual machine themselves. The starting point is reading in the bytecode; the end result is a working optimizing just-in-time compiler (see the sketch below).
The course materials are provided via StudOn.
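As a taste of where such a block course starts, here is a minimal sketch of a stack-based bytecode interpreter loop in Java. It is only an illustration under assumed conventions: the opcode set (PUSH, ADD, PRINT, HALT) and the class name MiniVM are hypothetical stand-ins, not the actual course material.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a stack-based bytecode interpreter (hypothetical opcodes).
public class MiniVM {
    static final byte PUSH  = 0; // followed by a one-byte constant to push
    static final byte ADD   = 1; // pops two values, pushes their sum
    static final byte PRINT = 2; // pops and prints the top of the stack
    static final byte HALT  = 3; // stops execution

    public static void run(byte[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter into the bytecode array
        while (true) {
            switch (code[pc++]) {
                case PUSH  -> stack.push((int) code[pc++]);
                case ADD   -> stack.push(stack.pop() + stack.pop());
                case PRINT -> System.out.println(stack.pop());
                case HALT  -> { return; }
                default    -> throw new IllegalStateException("unknown opcode");
            }
        }
    }

    public static void main(String[] args) {
        // Computes and prints 2 + 3.
        run(new byte[] { PUSH, 2, PUSH, 3, ADD, PRINT, HALT });
    }
}
```

A VM along these lines would grow over the block course toward the stated goal: detecting hot code and handing it to a just-in-time compiler instead of interpreting it.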
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Tobias Heineken, Florian Mayer, Julian Brandner |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
by arrangement | - | | | | |
Übungen zu Grundlagen des Übersetzerbaus
Basic data
Title | Übungen zu Grundlagen des Übersetzerbaus |
---|---|
Short text | inf2-ueb-ex |
Module frequency | winter semester only |
Semester hours per week | 2 |
Parallel groups / dates
In the exercises, the concepts and techniques for implementing a compiler presented in the lecture are put into practice. The goal of the exercises is to implement a working compiler for the example programming language e2 by the end of the semester (a sketch of one such step appears below). The additional background required for this (e.g., the basics of x86-64 assembly) is taught in the classroom exercises. The milestones to be reached over the course of the semester are listed in the StudOn entry of the lecture. The course materials are provided via StudOn.
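To illustrate the kind of milestone such a compiler passes through, here is a minimal sketch in Java of emitting x86-64 assembly (AT&T syntax) for a tiny expression tree. The e2 language is not reproduced here, so the Expr types and the emitted instruction sequence are hypothetical stand-ins, not the actual course code.

```java
// Hypothetical sketch: x86-64 code generation for a tiny expression tree.
public class CodeGenSketch {
    sealed interface Expr permits Num, Add {}
    record Num(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    // Emits instructions that leave the expression's value in %rax,
    // spilling the intermediate left operand to the machine stack.
    static void emit(Expr e, StringBuilder out) {
        if (e instanceof Num n) {
            out.append("    movq $").append(n.value()).append(", %rax\n");
        } else if (e instanceof Add a) {
            emit(a.left(), out);
            out.append("    pushq %rax\n");       // save the left operand
            emit(a.right(), out);
            out.append("    popq %rcx\n");        // reload the left operand
            out.append("    addq %rcx, %rax\n");  // %rax = left + right
        }
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        emit(new Add(new Num(2), new Num(3)), out); // compiles "2 + 3"
        System.out.print(out);
    }
}
```

A full set of milestones would put a scanner, parser, and semantic analysis in front of such a code generator; the emitted text can then be assembled and linked with a standard toolchain.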
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Tobias Heineken, Prof. Dr. Michael Philippsen, Florian Mayer |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Mon, 14:15 - 15:45 | 14.10.2024 - 03.02.2025 | 23.12.2024, 30.12.2024, 06.01.2025 | | | |
Parallel group 2
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Tobias Heineken, Florian Mayer |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Fri, 08:15 - 09:45 | 18.10.2024 - 07.02.2025 | 01.11.2024, 27.12.2024, 03.01.2025 | | | |
Parallel group 3
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Tobias Heineken, Prof. Dr. Michael Philippsen, Florian Mayer |
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Fri, 10:15 - 11:45 | 18.10.2024 - 07.02.2025 | 01.11.2024, 27.12.2024, 03.01.2025 | | | |
Übungen zu Parallele und Funktionale Programmierung
Basic data
Title | Übungen zu Parallele und Funktionale Programmierung |
---|---|
Short text | UePFP |
Module frequency | winter semester only |
Semester hours per week | 2 |
Parallel groups / dates
Parallel group 10
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Tue, 12:00 - 14:00 | 15.10.2024 - 04.02.2025 | 24.12.2024, 31.12.2024 | | | 14201.00.001 |
Parallel group 4
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Tue, 12:15 - 13:45 | 15.10.2024 - 04.02.2025 | 24.12.2024, 31.12.2024 | | | 11302.02.133 |
Parallel group 11
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Tue, 16:00 - 18:00 | 15.10.2024 - 04.02.2025 | 24.12.2024, 31.12.2024 | | | 14201.00.001 |
Parallel group 14
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 10:00 - 12:00 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | 11302.00.156 |
Parallel group 9
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Mon, 16:00 - 18:00 | 14.10.2024 - 03.02.2025 | 23.12.2024, 30.12.2024, 06.01.2025 | | | 14201.00.001 |
Parallel group 5
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Fri, 12:15 - 13:45 | 18.10.2024 - 07.02.2025 | 01.11.2024, 20.12.2024, 27.12.2024, 03.01.2025 | | | 11302.02.133 |
Parallel group 8
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Thu, 14:00 - 16:00 | 17.10.2024 - 06.02.2025 | 26.12.2024, 02.01.2025 | | | 11302.00.153 |
Parallel group 6
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Thu, 16:15 - 17:45 | 17.10.2024 - 06.02.2025 | 26.12.2024, 02.01.2025 | | | 11302.02.133 |
Parallel group 1
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 12:15 - 13:45 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | 11302.02.133 |
Parallel group 2
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 08:15 - 09:45 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | 11302.02.133 |
Parallel group 12
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 14:00 - 16:00 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | 14201.00.001 |
Parallel group 13
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Prof. Dr. Michael Philippsen, Julian Brandner, David Schwarzbeck |
Maximum number of participants: 25
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Thu, 16:00 - 18:00 | 17.10.2024 - 06.02.2025 | 26.12.2024, 02.01.2025 | | | 14201.00.001 |
Parallel group 7
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Wed, 16:15 - 17:45 | 16.10.2024 - 05.02.2025 | 25.12.2024, 01.01.2025 | | | 11302.02.133 |
Parallel group 15
Semester hours per week | 2 |
---|---|
Teaching language | German |
Responsible | Julian Brandner, Prof. Dr. Michael Philippsen, David Schwarzbeck |
Maximum number of participants: 40
Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
---|---|---|---|---|---|
weekly Tue, 10:15 - 11:45 | 15.10.2024 - 04.02.2025 | 24.12.2024, 31.12.2024 | | | |
- Secondary Application: WO2013091908
- Secondary Application: WO2013091907
- Priority Patent Application: DE102011089180
- Priority Patent Application: DE102011089181
- Priority Patent Application: EP2469496 (EP10196851)
- 30th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS) at 39th IEEE Intl. Parallel + Distributed Processing Symposium, IPDPS 2025, Milan, Italy, June 3-6, 2025
- PASA 2020, 14th Workshop on Parallel Systems and Algorithms, at the International Conference on Architecture of Computing Systems (ARCS 2020), Aachen, Germany, May 25-28, 2020, PC member
- 5th Workshop on Artificial Intelligence and Empirical Methods for Software Engineering and Parallel Computing Systems (AI-SEPS), co-located with ACM Conf. on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2018), Boston, MA, November 04-09, 2018, PC member
- PASA 2018, 13th Workshop on Parallel Systems and Algorithms, at the International Conference on Architecture of Computing Systems (ARCS 2018), Braunschweig, Germany, April 9-12, 2018, PC member
- 4th Workshop on Software Engineering for Parallel Systems (SEPS), co-located with ACM Conf. on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2017), Vancouver, Canada, October 22-27, 2017, PC member
- 11th ACM International Conference on Distributed and Event-Based Systems (DEBS 2017), Barcelona, Spain, June 19-23, 2017, PC member
- 6th Intl. Workshop on Multicore Software Engineering (IWMSE17) at Euro-Par 2017, 23rd Intl. Europ. Conf. on Parallel and Distributed Computing, Santiago de Compostela, Spain, August 28-29, 2017, PC member
- 27. PARS-Workshop 2017, Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware, Hagen, Germany, May 4-5, 2017, PC member
- 3rd Workshop on Software Engineering for Parallel Systems (SEPS), co-located with ACM Conf. on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2016), Amsterdam, The Netherlands, October 30 – November 4, 2016, PC member
- 5th Intl. Workshop on Multicore Software Engineering (IWMSE16) at Euro-Par 2016, 22nd Intl. Europ. Conf. on Parallel and Distributed Computing, Grenoble, France, August 22-26, 2016, PC member
- PASA 2016, 12th Workshop on Parallel Systems and Algorithms, at the International Conference on Architecture of Computing Systems (ARCS 2016), Nürnberg, Germany, April 4-5, 2016, PC member
- Intl. Conf. on Multicore Software Engineering, Performance, and Tools (MUSEPAT 2016), at the 31st ACM/SIGAPP Symposium on Applied Computing, Pisa, Italy, April 4-8, 2016, PC member and Steering Committee member
- 2nd Workshop on Software Engineering for Parallel Systems (SEPS), co-located with ACM Conf. on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2015), Pittsburgh, PA, October 25-30, 2015, PC member
- 26. PARS-Workshop 2015, Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware, Potsdam, Germany, May 7-8, 2015, PC member
- Intl. Conf. on Multicore Software Engineering, Performance, and Tools (MUSEPAT 2015), at the 30th ACM/SIGAPP Symposium On Applied Computing, Salamanca, Spain, April 13-17, 2015, PC member and Steering Committee member
- 1st Workshop on Software Engineering for Parallel Systems (SEPS), co-located with ACM Conf. on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH 2014), Portland, OR, October 21, 2014, PC member
- PASA 2014, 11th Workshop on Parallel Systems and Algorithms, Lübeck, Germany, Feb. 25-28, 2014
- IPDPS’14, 28th IEEE Intl. Parallel + Distributed Processing Symposium, Phoenix, AZ, May 19-23, 2014, PC member
- Intl. Conf. on Multicore Software Engineering, Performance, and Tools (MUSEPAT 2013), Saint Petersburg, Russia, August 21-23, 2013, PC and Steering Committee member
- 25. PARS – Workshop, Erlangen, April 11-12, 2013, Co-Chair
- Workshop on Parallel and Distributed Programming at Euro-Par 2013, Aachen, Germany, August 26-30, 2013, Local PC chair
- IPDPS’13, 27th IEEE Intl. Parallel + Distributed Processing Symposium, Boston, MA, May 20-24, 2013, PC member
- ISMM’13, ACM Intl. Symp. on Memory Management, Seattle, WA, June 20-21, 2013, PC member
- MC’12, Facing the Multicore-Challenge III, September 19-21, 2012, PC member
- MSEPT’12, International Conference on Multicore Software Engineering, Performance, and Tools, at Tools 2012, Prague, Czech Republic, May 31, 2012, Organizer, PC Co-Chair
- PASA 2012, 10th Workshop on Parallel Systems and Algorithms, Munich, Germany, Feb. 28-29, 2012
- PADTAD IX, Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging at the Intl. Symp. on Software Testing and Analysis, Toronto, Canada, July 17, 2011
- 24. PARS – Workshop, Zürich, May 26-27, 2011
- 4th Intl. Systems and Storage Conference, SYSTOR 2011, Haifa, Israel, May 30-June 1, 2011
- 3rd Intl. Workshop on Multicore Software Engineering (IWMSE10) at 32st Intl. Conf. on Software Engineering (ICSE), Cape Town, South Africa, May 03, 2010, Organizer, PC Co-Chair
- 4th Intl. Workshop on Multicore Software Engineering (IWMSE11) at 33rd Intl. Conf. on Software Engineering (ICSE), Waikiki, Honolulu, Hawaii, May 21, 2011, Organizer, PC Co-Chair
- 2nd Intl. Workshop on Multicore Software Engineering (IWMSE09) at 31st Intl. Conf. on Software Engineering (ICSE), Vancouver, Canada, May 18, 2009
- IEEE International Parallel and Distributed Processing Symposium, IPDPS 2008, Miami, FL, April 14-18, 2008
- 3rd Intl. ACM SIGPLAN/SIGOPS Conf. on Virtual Execution Environments, VEE 2007, San Diego, CA, June 13-15, 2007
- 2006 Intl. Symp. on Parallel and Distributed Processing and Applications, ISPA 2006, Sorrento, Italy, Dec. 1-4, 2006
- 2006 High Performance Computing and Simulation Conference, Bonn, May 28-30, 2006
- 5th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT’05), Athens, Greece, December 18-21, 2005
- International Conference on Parallel and Distributed Computing and Networks, PDCN 2005, Innsbruck, Austria, February 15-17, 2005
- International Conference on Compiler Construction, CC’05, Edinburgh, Scotland, April 4-5, 2005
- International Conference on High Performance Computing, Hyderabad, India, December 17-20, 2003
- Workshop on High-Performance Object-Oriented and Middleware Systems at Euro-Par 2003, Klagenfurt, Austria, August 26-29, 2003, Local PC Chair
- 5th International Workshop on Java for Parallel and Distributed Computing at IPDPS 2003, Nice, France, April 22-26, 2003
- Workshop on Parallel Programming: Models, Methods, and Programming Languages at Euro-Par 2002, Paderborn, Germany, August 27-30, 2002
- Fourth Workshop on Java for High-Performance Computing in conjunction with ACM International Conference on Supercomputing, ICS ’02, New York, USA, June 22-26, 2002
- 4th International Workshop on Java for Parallel and Distributed Computing at IPDPS 2002, Fort Lauderdale, USA, April 15-19, 2002
- Tenth IEEE International Symposium on High Performance Distributed Computing (HPDC-10), San Francisco, California, August 7-10, 2001
- Workshop on Object Oriented Architectures, Tools and Applications at Euro-Par 2001, Manchester, UK, August 28-31, 2001
- NET.OBJECTDAYS 2001 (successor to JIT, DJEK, STIJA), Erfurt, Germany, September 10-13, 2001
- Workshop: Java in High Performance Computing at HPCN-Europe’01, Amsterdam, June 25-27, 2001
- Third Workshop on Java for High-Performance Computing in conjunction with ACM International Conference on Supercomputing, ICS ’01, Sorrento, Italy, June 16-21, 2001
- ACM 2001 Java Grande Conference, Stanford, California, June 2-4, 2001, PC Chair
- 3rd International Workshop on Java for Parallel and Distributed Computing at IPDPS 2001, San Francisco, California, April 23-27, 2001
- Workshop on Object Oriented Architectures, Tools and Applications at Euro-Par 2000, Munich, Germany, August 29-September 1, 2000, Local PC Chair
- NET.OBJECTDAYS 2000 (successor to JIT, DJEK, STIJA), Erfurt, Germany, October 10-12, 2000
- ACM 2000 Java Grande Conference, San Francisco, California, June 3-4, 2000
- 2nd International Workshop on Java for Parallel and Distributed Computing at IPDPS 2000, Cancun, Mexico, May 1-5, 2000
- Workshop: Java in High Performance Computing at HPCN-Europe’00, Amsterdam, May 8-10, 2000
- Workshop on Java for High Performance Computing at ICS Supercomputing 2000, Santa Fe, New Mexico, May 7, 2000
- TOOLS USA’99 (Technology of Object-Oriented Languages and Systems), Santa Barbara, California, August 2-6, 1999
- JIT’99 Java-Informations-Tage 1999, Düsseldorf, Germany, September 20-21, 1999
- Tutorial and Workshop on Java for High-Performance Computing at Supercomputing 99, Rhodes, Greece, June 19-20, 1999
- ACM 1999 Java Grande Conference, San Francisco, California, June 12-14, 1999
- Workshop: Java in High Performance Computing at HPCN-Europe’99, RAI Conference Center, Amsterdam, April 12-14, 1999
- International Workshop on Java for Parallel and Distributed Computing at IPDPS 1999 San Juan, Puerto Rico, April 12-16, 1999
- Asia Pacific Web Conference (APWeb98), Beijing, P.R. China, September 27-30, 1998
- TOOLS USA’98 (Technology of Object-Oriented Languages and Systems), Santa Barbara, California, August 3-7, 1998
- ACM 1998 Workshop on Java for High-Performance Network Computing, Palo Alto, California, February 28-March 1, 1998
Current:
- GI liaison lecturer (Vertrauensdozent) in Erlangen, since 04/2004.
- Teaching load commission (Lehrbelastungskommission), since 05/2002.
Former:
- Member of the appointment committee for the W3 professorship in Data and Software Engineering (successor to Leis), 12/2022-08/2024.
- Member of the appointment committee for the W3 professorship in Computer Science (Systems Software) at Friedrich-Schiller-Universität (FSU) Jena, 01/2020-06/2022.
- Chair of the appointment committee for the W2 professorship in Didactics of Computer Science (successor to Romeike), 11/2018-11/2019.
- Acting head of the professorship for Didactics of Computer Science, 10/2018-11/2019.
- Member of the study commission for Computer Science, 11/2018-11/2019.
- Member of the board of the Center for Teacher Education, 10/2018-11/2019.
- Member of the appointment committee for the W3 professorship in Experimental Astroparticle Physics (successor to Anton), 12/2017-05/2019.
- Member of the appointment committee for the W3 professorship in Visual Computing (successor to Greiner), 06/2016-12/2017.
- Member of the rooms and buildings commission of the Faculty of Engineering, 10/2013-04/2015.
- Deputy spokesman of the collegial leadership of the Department of Computer Science, 10/2011-09/2013.
- Acting head of the professorship for Didactics of Computer Science, 08/2012-09/2013.
- Chair of the appointment procedure for the W2 professorship in Didactics of Computer Science (successor to Brinda), 07/2012-02/2013.
- Member of the examination board for the MA program in International Information Systems, 12/2008-11/2019.
- Member of the appointment committee for the W1 professorship in Digital Sports, 12/2009-02/2011.
- Member of the appointment committee for the W3 professorship in IT Security Infrastructures, 10/2009-12/2010.
- Secretary of the appointment committee for the W2 professorship in Open Source Software, 07/2008-09/2009.
- External member of the appointment committee for the W3 professorship in Software Systems at the University of Passau, 06/2008-10/2008.
- Member of the senate and the university council of the Friedrich-Alexander-Universität, 10/2007-09/2009.
- Member of the faculty council of the Faculty of Engineering, 10/2004-09/2009.
- Member of the commission for the distribution and use of the computer science tuition fees, 11/2006-09/2009 (for the Computer Science program: 11/2006-09/2007; for the IuK program: 10/2007-09/2009 and 05/2010-09/2010).
- Member of the appointment committee for the W2 professorship in Technical and Scientific High-Performance Computing, 05/2006-12/2007.
- IT generalist on the DFG expert commission accompanying the 2007 project on the online election of the review boards (Fachkollegien), 04/2006-06/2008.
- Member of the appointment committee for the W2 professorship in Computer Science (Database Systems, successor to Jablonski), 01/2006-09/2007.
- Acting head of the Chair of Computer Science 3 (Computer Architecture), 10/2005-02/2009.
- Export officer of the Department of Computer Science, 10/2005-11/2007.
- Bachelor/Master working group for the Computer Science program, 05/2005-08/2007.
- Bachelor/Master working group for the Information and Communication Technology program, 05/2005-01/2006.
- Managing director of the Department of Computer Science, 10/2004-09/2005.
- Member of the structural commission of the Faculty of Engineering, 10/2004-09/2005.
- Member of the Consilium of the Faculty of Engineering, 10/2004-09/2005.
- Member of the board of the Interdisciplinary Center for Functional Genomics (FUGE), 09/2004-06/2009.
- Library modernization working group, 04/2004-12/2012.
- Member of the appointment committee for the W3 professorship in Computer Science (Computer Architecture, successor to Dal Cin), 11/2003-02/2009.
- Member of the study commission for Information and Communication Technology, 10/2003-09/2005.
- Member of the study commission for Information Systems (Wirtschaftsinformatik), since 04/2002.
- Member of the study commission for Computer Science, 04/2002-09/2011.
- Senate rapporteur for the appointment procedure for the C3 professorship in Organic Chemistry (successor to Saalfrank), 08/2004-02/2005.
- Secretary of the appointment procedure for the C3 professorship in Didactics of Computer Science, 04/2004-03/2005.
- Member of the appointment committee for the C3 professorship in Computer Science (successor to Müller), 11/2003-07/2004.
- Member of the appointment committee for the C3 professorship in Numerical Simulation with High-Performance Computers, 07/2002-01/2003.
- Member of the appointment committee for the C4 professorship in Computer Science (Computer Networks and Communication Systems, successor to Herzog), 04/2002-02/2003.
- ACM Transactions on Software Engineering and Methodology, TOSEM
- ACM Transactions on Programming Languages and Systems, TOPLAS
- IEEE Transactions on Parallel and Distributed Systems
- Journal of Parallel and Distributed Computing
- Concurrency – Practice and Experience
- Software – Practice and Experience
- Journal of Systems and Software
- Informatik-Spektrum
- Informatik – Forschung und Entwicklung
- GI, Gesellschaft für Informatik, since 1987
- ACM, Association for Computing Machinery, since 1990
- IEEE, Institute of Electrical and Electronics Engineers, since 1993. Senior Member since 2020
2023
Visualisierung der Statik, Dynamik und Infrastruktur von Software mit Hilfe der Stadt‐Metapher (Dissertation, 2023)
URL: https://opus4.kobv.de/opus4-fau/files/23373/DissertationVeronikaDashuberPress.pdf
BibTeX: Download
2022
:
Ein datenparalleler Ansatz zur Beschleunigung von Datenflussanalysen mittels GPU (Dissertation, 2022)
DOI: 10.25593/978-3-96147-494-3
BibTeX: Download
2021
:
Datengetriebene Methoden zur Bestimmung von Position und Orientierung in funk‐ und trägheitsbasierter Koppelnavigation (Dissertation, 2021)
URL: https://nbn-resolving.org/urn:nbn:de:bvb:29-opus4-173550
BibTeX: Download
2019
:
Effiziente Speicherung von Zeitreihen mit Betriebsdaten aus Software-Systemen zur Analyse von Laufzeitanomalien (Dissertation, 2019)
URL: http://www.shaker.de/shop/978-3-8440-6785-9
BibTeX: Download
2018
:
Learning Code Transformations from Repositories (Dissertation, 2018)
DOI: 10.25593/978-3-96147-142-3
BibTeX: Download
2017
:
Modellierung und effiziente Ausführung von Softwareentwicklungsprozessen (Dissertation, 2017)
URL: http://nbn-resolving.de/urn:nbn:de:bvb:29-opus4-82628
BibTeX: Download
:
Eine domänenspezifische Sprache zur Analyse von Software-Verfolgbarkeitsinformationen (Dissertation, 2017)
URL: https://www.shaker.de/de/content/catalogue/index.asp?lang=de&ID=8&ISBN=978-3-8440-5689-1&search=yes
BibTeX: Download
2014
:
Compiler and Runtime Techniques to Identify and Optimize Atomic Blocks in Parallel Programs (Dissertation, 2014)
BibTeX: Download
:
Latency Minimization of Order-Preserving Distributed Event-Based Systems (Dissertation, 2014)
BibTeX: Download
2012
:
Modellbasierte Extraktion, Repräsentation und Analyse von Traceability-Informationen (Dissertation, 2012)
URL: https://www2.cs.fau.de/publication/download/2012_Dissertation_JosefAdersberger.pdf
BibTeX: Download
:
Dynamische probabilistische Bewegungsmodelle mittels Verhaltensmodellierung (Dissertation, 2012)
BibTeX: Download
2010
:
Graphbasierte Prozedurale Abstraktion (Dissertation, 2010)
BibTeX: Download
:
Improved DSM Efficiency, Flexibility, and Correctness (Habilitation, 2010)
URL: https://opus4.kobv.de/opus4-fau/files/1543/paper.pdf
BibTeX: Download
:
Attribute Grammar Based Genetic Programming (Dissertation, 2010)
BibTeX: Download
2009
:
Reparallelization and Migration of OpenMP Applications in Grid Environments (Dissertation, 2009)
BibTeX: Download
:
Ein agentenbasierter evolutionärer Adaptions- und Optimierungsansatz für verteilte Systeme (Dissertation, 2009)
BibTeX: Download
:
Dynamische Programm-Code-Verwaltung und -Optimierung für eingebettete Systeme (Dissertation, 2009)
URL: http://www.opus.ub.uni-erlangen.de/opus/volltexte/2009/1544/pdf/DominicSchellDissertation.pdf
BibTeX: Download
2007
:
Automatische Generierung optimaler struktureller Testdaten für objekt-orientierte Software mittels multi-objektiver Metaheuristiken (Dissertation, 2007)
URL: https://www.ps.tf.fau.de/files/2020/04/norbertoster_dissertation2007.pdf
BibTeX: Download
2006
:
Optimisation of the Allocation of Functions in Vehicle Networks (Dissertation, 2006)
URL: http://www2.informatik.uni-erlangen.de/publication/download/diss-hardung.pdf
BibTeX: Download
:
Modellbasierte Generierung von Beherrschungsmechanismen für Inkonsistenzen in komponentenbasierten Systemen (Dissertation, 2006)
BibTeX: Download
2004
:
Advanced Compiling Techniques to reduce RAM Usage of Static Operating Systems (Dissertation, 2004)
URL: https://opus4.kobv.de/opus4-fau/frontdoor/index/index/docId/65
BibTeX: Download
:
Syntaxanalyse auf Basis der Dependenzgrammatik (Dissertation, 2004)
BibTeX: Download
2003
:
Ein Modell zur Beschreibung und Lösung von Zeitplanungsproblemen (Dissertation, 2003)
BibTeX: Download
:
Erfolge und Probleme evolutionärer Algorithmen, induktiver logischer Programmierung und ihrer Kombination (Habilitation, 2003)
BibTeX: Download
:
Ein sprachunabhängiger Ansatz zur Entwicklung deklarativer, robuster LA-Grammatiken mit einer exemplarischen Anwendung auf das Deutsche und das Englische (Dissertation, 2003)
BibTeX: Download
2002
:
Integrierte Hardware- und Softwareplanung flexibler Fertigungssysteme (Dissertation, 2002)
BibTeX: Download
:
Structural Coverage Criteria for Testing Object-Oriented Software (Dissertation, 2002)
BibTeX: Download
Own:
2001
:
Leistungsaspekte Paralleller Objektorientierter Programmiersprachen (Habilitation, 2001)
BibTeX: Download
1994
:
Optimierungstechniken zur Übersetzung paralleler Programmiersprachen (Dissertation, 1994)
BibTeX: Download
- Marc Schanne: Software-Architekturen für lokalitätsabhängige Diensterbringung auf mobilen Endgeräten. [DA] Advisor: Philippsen, M.; completed 2002
- Sven Buth: Persistenz von verteilten Objekten im Rahmen eines offenen, verteilten eCommerce-Frameworks. [DA] Advisor: Philippsen, M.; completed 2002
- Jochen Reber: Verteilter Garbage Collector für JavaParty. [SA] Advisor: Philippsen, M.; completed 2000
- Thorsten Schlachter: Entwicklung eines Java-Applets zur diagrammbasierten Navigation innerhalb des WWW. [SA] Advisor: Philippsen, M.; completed 1999
- Edwin Günthner: Komplexe Zahlen für Java. [DA] Advisor: Philippsen, M.; completed 1999
- Christian Nester: Ein flexibles RMI Design für eine effiziente Cluster Computing Implementierung. [DA] Advisor: Philippsen, M.; completed 1999
- Daniel Lukic: ParaStation-Anbindung für Java. [SA] Advisor: Philippsen, M.; completed 1998
- Jörg Afflerbach: Vergleich von verteilten JavaParty-Servlets mit äquivalenten CGI-Skripts. [SA] Advisor: Philippsen, M.; completed 1998
- Thomas Dehoust: Abbildung heterogener Datensätze in Java. [SA] Advisor: Philippsen, M.; completed 1998
- Guido Malpohl: Erkennung von Plagiaten unter einer Vielzahl von ähnlichen Java-Programmen. [SA] Advisor: Philippsen, M.; completed 1997
- Bernhard Haumacher: Lokalitätsoptimierung durch statische Typanalyse in JavaParty. [DA] Advisor: Philippsen, M.; completed 1997
- Matthias Kölsch: Dynamische Datenobjekt- und Threadverteilung in JavaParty. [SA] Advisor: Philippsen, M.; completed 1997
- Christian Nester: Parallelisierung rekursiver Benchmarks für JavaParty mit expliziter Datenobjekt- und Threadverteilung. [SA] Advisor: Philippsen, M.; completed 1997
- Matthias Jacob: Parallele Realisierung geophysikalischer Basisalgorithmen in JavaParty. [DA] Advisor: Philippsen, M.; completed 1997
- Oliver Reiff: Optimierungsmöglichkeiten für Java-Bytecode. [SA] Advisor: Philippsen, M.; completed 1996
- Marc Schanne: Laufzeitverhalten und Portierungsaspekte der Java-VM und ausgewählter Java-Bibliotheken. [SA] Advisor: Philippsen, M.; completed 1996
- Edwin Günthner: Portierung der Java VM auf den Multimedia Video Prozessor MVP TMS320C80. [SA] Advisor: Philippsen, M.; completed 1996
- Matthias Zenger: Transparente Objektverteilung in Java. [SA] Advisor: Philippsen, M.; completed 1996
- Matthias Winkel: Erweiterung von Java um ein FORALL. [SA] Advisor: Philippsen, M.; completed 1996
- Roland Kasper: Modula-2*-Benchmarks in einem Netz von Arbeitsplatzrechnern. [SA] Advisor: Philippsen, M.; completed 1993
- Markus Mock: Alignment in Modula-2*. [DA] Advisor: Philippsen, M.; completed 1992
- Stefan Hänßgen: Ein symbolischer X Windows Debugger für Modula-2*. [SA] Advisor: Philippsen, M.; completed 1992
- Paul Lukowicz: Code-Erzeugung für Modula-2* für verschiedene Maschinenarchitekturen. [DA] Advisor: Philippsen, M.; completed 1992
- Hendrik Mager: Die semantische Analyse von Modula-2*. [SA] Advisor: Philippsen, M.; completed 1992
- Ernst Heinz: Automatische Elimination von Synchronisationsbarrieren in synchronen FORALLs. [DA] Advisor: Philippsen, M.; completed 1991
- Stephan Teiwes: Die schnellste Art zu multiplizieren? – Der Algorithmus von Schönhage und Strassen auf der Connection Machine. [SA] Advisor: Philippsen, M.; completed 1991
- Ralf Kretzschmar: Ein Modula-2*-Übersetzer für die Connection Machine. [DA] Advisor: Philippsen, M.; completed 1991
04/02 – today | Full Professor (W3), Chair of the Programming Systems Group (Informatik 2) of the Friedrich-Alexander Universität Erlangen-Nürnberg, Germany |
---|---|
06/10 | Declined an offer of appointment as Full Professor (W3) for Parallel and Distributed Architectures at the Johannes Gutenberg University Mainz |
01/98 – 03/02 | Department manager of the Softwaretechnik/Authorized Java Center group at FZI Forschungszentrum Informatik, Karlsruhe, Germany |
09/95 – 09/01 | Assistant Professor (Hochschulassistent, C1) at IPD, Institute for Programming Systems, Chair of Prof. Tichy, at KIT, Karlsruhe Institute of Technology |
09/94 – 08/95 | Post-Doc at ICSI (International Computer Science Institute) at the University of California, Berkeley |
02/90 – 08/94 | Research Assistant (BAT IIa) and PhD student at IPD, Institute for Programming Systems, Chair of Prof. Tichy, at KIT, Karlsruhe Institute of Technology |
Education
07/01 | Habilitation in Computer Science at KIT, Karlsruhe Institute of Technology; Topic: Performance Aspects of Parallel Object-Oriented Programming Languages |
---|---|
11/93 | PhD (Dr. rer. nat.) in Computer Science (summa cum laude) at KIT, Karlsruhe Institute of Technology; Topic: Optimization Techniques for Compiling Parallel Programming Languages; Advisors: Prof. Dr. Walter F. Tichy and Prof. Dr. G. Goos |
WS 85/86 – 89/90 | Diplom (BA and MA) in Computer Science with a minor in Industrial Engineering and Management (Wirtschaftsingenieurwesen) at KIT, Karlsruhe Institute of Technology |
05/85 | Abitur (secondary school exam / university entrance qualification), (1.6, third of class of 1985) |
08/76 – 05/85 | Theodor-Heuss-Gymnasium, Essen-Kettwig, Germany |
08/72 – 06/76 | Schmachtenbergschule, Kettwig, Germany |
Prizes, awards, nominations
2023 | |
---|---|
2021 | |
2019 | |
2015 | |
2014 | Nominated by the Faculty of Engineering of the Friedrich-Alexander Universität Erlangen-Nürnberg and its computer science student council (Fachschaft Informatik) for the Ars legendi Prize for excellent university teaching of the Stifterverband and the German Rectors' Conference |
2008 | |
2008 | |
International experience
05/15 – 16/15 | ICSI (International Computer Science Institute) at the University of California, Berkeley, CA |
---|---|
12/10 – 03/11 | Microsoft Research, Research in Software Engineering (RiSE) Group, Redmond, WA |
09/94 – 08/95 | ICSI (International Computer Science Institute) at the University of California, Berkeley, CA |
02/96 – 04/96 | Another research stay at ICSI in Berkeley, CA |
02/92 – 03/92 | Research stay at INRIA (Institut National de Recherche en Informatique et en Automatique), Sophia Antipolis, France |
02/91 – 03/91 | Another research stay at INRIA in Sophia Antipolis, France |
02/90 – today | Countless trips to international scientific conferences to give formal presentations |
Consulting
04/91 – today | Self-employed management consulting and expert's reports for various industry and crafts enterprises |
---|---|
10/99 – 01/13 | Design and development of a use-case-specific content management system for ISO Arzneimittel GmbH & Co. KG |
12/95 – 12/97 | Design and development of a Java extension for scalable Internet services and electronic trade, for Electric Communities, CA |
01/96 – 05/96 | Design of an application for Mercedes-Benz Lease & Finanz GmbH (now Mercedes-Benz Bank AG) |
07/85 – 03/91 | Working student at Stinnes Organisationsberatung GmbH, various tasks across both Stinnes AG (now DB Schenker AG) and Veba AG (now E.ON AG) |
01/84 – 12/86 | Freelance system analyst and software developer at the headquarters of Horten AG (now Galeria Karstadt Kaufhof GmbH) |
07/84 – 08/84 | Working student at Brenntag Mineralöl GmbH; analysis and black-box testing of an externally procured merchandise planning and control system |