Tobias Feigl
Dr.-Ing. Tobias Feigl
-
Verification and validation in industrial practice
(Own Funds)
Term: since 01.01.2022

Detection of flaky tests based on software version control data and test execution history
Regression tests are carried out often and because of their volume also fully automatically. They are intended to ensure that changes to individual components of a software system do not have any unexpected side effects on the behavior of subsystems that they should not affect. However, even if a test case executes only unmodified code, it can still sometimes succeed and sometimes fail. This so-called "flaky" behavior can have different reasons, including race conditions due to concurrent execution or temporarily unavailable resources (e.g., network or databases). Flaky tests are a nuisance to the testing process in every respect, because they slow down or even interrupt the entire test execution and they undermine the confidence in the test results: if a test run is successful, it cannot necessarily be concluded that the program is really error-free, and if the test fails, expensive resources may have to be invested to reproduce and possibly fix the problem.
The easiest way to detect test flakiness is to repeatedly run test cases on the identical code base until the test result changes or there is reasonable statistical confidence that the test is non-flaky. However, this is rarely possible in an industrial environment, as integration or system tests can be extremely time-consuming and resource-demanding, e.g., because they require the availability of special test hardware. For this reason, it is desirable to classify test cases with regard to their flakiness without repeated re-execution, using only the information already available from the preceding development and test phases.
In 2022, we implemented and compared various so-called black-box methods for detecting test flakiness and evaluated them in a real industrial test process with 200 test cases. We classified test cases exclusively on the basis of generally available information from version control systems and test execution tools, i.e., in particular without an extensive analysis of the code base and without monitoring of the test coverage, which would in most cases be impossible for embedded systems anyway. From the 122 available indicators (including the test execution time, the number of lines of code, or the number of changed lines of code in the last 3, 14, and 54 days) we extracted different subsets and examined their suitability for detecting test flakiness using different techniques. The methods applied to the feature subsets include rule-based methods (e.g., "a test is flaky if it has failed at least five times within the observation window, but not five times in a row"), empirical evaluations (including the computation of the cumulative weighted "flip rate", i.e., the frequency of alternating between test success and failure), as well as various methods from the domain of machine learning (e.g., classification trees, random forests, or multi-layer perceptrons). Using AI-based classifiers together with the SHAP approach for explaining AI models, we determined the four most important indicators ("features") for detecting test flakiness in the industrial environment under consideration. So-called gradient boosting with the complete set of indicators proved to be optimal (with an F1-score of 96.5%). The same method with only the four selected features achieved just marginally lower accuracy and recall values (with almost the same F1-score).
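The cumulative weighted flip rate mentioned above can be sketched as follows. The decay-based weighting and the function name are illustrative assumptions, not the exact formula used in the study:

```python
def flip_rate(history, decay=1.0):
    """Weighted frequency of result alternations in a test execution history.

    history: list of booleans (True = pass, False = fail), oldest first.
    decay:   weight multiplier per step back in time (1.0 = unweighted);
             values < 1.0 emphasize recent flips.
    """
    if len(history) < 2:
        return 0.0
    rev = list(reversed(history))
    flips = total = 0.0
    weight = 1.0
    # Walk over transitions from the most recent backwards.
    for newer, older in zip(rev, rev[1:]):
        if newer != older:
            flips += weight
        total += weight
        weight *= decay
    return flips / total

# A strictly alternating test scores 1.0; a stable test scores 0.0.
print(flip_rate([True, False, True, False]))  # 1.0
print(flip_rate([True, True, True, True]))    # 0.0
```

A rule such as "flaky if the flip rate exceeds a calibrated threshold" can then serve as one of the empirical classifiers described above.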
Synergies of a-priori and a-posteriori analysis methods to explain artificial intelligence
Artificial intelligence is rapidly conquering more domains of everyday life, and machines make ever more critical decisions: braking or evasive maneuvers in autonomous driving, credit(un)worthiness of individuals or companies, diagnosis of diseases from various examination results (e.g., cancer detection from CT/MRT scans), and many more. For such a system to be trusted in a real-life productive setting, it must be ensured and proven that the learned decision rules are correct and reflect reality. The training of a machine model is itself a very resource-intensive process, and the quality of the result can usually only be quantified afterwards with extremely great effort and well-founded specialist knowledge. The success and quality of the learned model depend not only on the choice of a particular AI method but are also strongly influenced by the volume and quality of the training data.
In 2022, we therefore examined which qualitative and quantitative properties an input set must have ("a priori evaluation") in order to achieve a good AI model ("a posteriori evaluation"). For this purpose, we compared various evaluation criteria from the literature and defined four basic indicators based on them: representativeness, freedom from redundancy, completeness, and correctness. The associated metrics allow a quantitative evaluation of the training data before the model is even built. To investigate the impact of poor training data on an AI model, we experimented with the so-called "dSprites" dataset, a popular procedurally generated image dataset used to evaluate image and pattern recognition methods. In this way, we generated different training data sets that differ in exactly one of the four basic indicators and thus have quantitatively different "a priori quality". We used all of them to train two different AI models: random forests and convolutional neural networks. Finally, we quantitatively evaluated the quality of the classification by the respective model using the usual statistical measures (accuracy, precision, recall, F1-score). In addition, we used SHAP (a method for explaining AI models) to determine the reasons for misclassifications in cases of poor data quality. As expected, the model quality correlates strongly with the training data quality: the better the latter is with regard to the four basic indicators, the more precise is the classification of unknown data by the trained models. However, a noteworthy discovery emerged while experimenting with the freedom from redundancy: if a trained model is evaluated with completely new/unknown inputs, the accuracy of the classification is sometimes significantly worse than if the available input data is split into a training and an evaluation data set. In the latter case, the a posteriori evaluation of the trained AI system misleadingly suggests a higher model quality.
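To illustrate how such a priori indicators can be quantified on raw data, here is a minimal sketch of two of the four basic indicators. The concrete metric definitions (rounding-based duplicate detection, normalized label entropy) are illustrative assumptions, not the exact metrics defined in the project:

```python
import numpy as np

def redundancy_freedom(X, decimals=3):
    """Share of distinct samples after rounding; 1.0 = no (near-)duplicates."""
    rounded = np.round(np.asarray(X, dtype=float), decimals)
    return len(np.unique(rounded, axis=0)) / len(rounded)

def representativeness(labels):
    """Normalized label entropy; 1.0 = perfectly balanced classes."""
    _, counts = np.unique(labels, return_counts=True)
    if len(counts) < 2:
        return 0.0  # a single class cannot represent the problem space
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

# One duplicate row lowers redundancy freedom; skewed labels lower representativeness.
print(redundancy_freedom([[1.0, 2.0], [1.0, 2.0], [3.0, 4.0]]))
print(representativeness([0, 1, 0, 1]))
```

Such scores can be computed before any training run and compared across candidate training sets, exactly in the spirit of the a priori evaluation described above.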
Few-Shot Out-of-Domain Detection in Natural Language Processing Applications
Natural language processing (NLP for short) using artificial intelligence has many areas of application, e.g., telephone or written dialogue systems (so-called chatbots) that provide cinema information, book tickets, register sick leave, or answer various questions arising in certain industrial processes. Such chatbots are often also deployed in social media, e.g., to recognize critical statements and to moderate them if necessary. With increasing progress in the field of artificial intelligence in general and NLP in particular, self-learning models are spreading that dynamically (and therefore mostly unsupervised) supplement their technical and linguistic knowledge from concrete practical use. But such approaches are susceptible to intentionally or unintentionally malicious inputs. Examples from industrial practice have shown that chatbots quickly "learn", for instance, racist statements in social networks and then make dangerous extremist statements. It is therefore of central importance that NLP-based models are able to distinguish between valid "in-domain (ID)" and invalid "out-of-domain (OOD)" data (i.e., both inputs and outputs). However, the developers of an NLP system need an immense amount of ID and OOD training data for the initial training of the AI model. While the former is already difficult to find in sufficient quantities, a meaningful a priori choice of the latter is usually hardly possible.
In 2022, we therefore examined and compared different approaches to OOD detection that work with little to no training data at all (hence called "few-shot"). The transformer-based, pre-trained language model RoBERTa, currently among the best and most widespread, served as the basis for the experimental evaluation. To improve the OOD detection, we applied fine-tuning and examined how reliably a pre-trained model can be adapted to a specific domain. In addition, we implemented various scoring methods and evaluated them to determine threshold values for the classification of ID and OOD data. To address the problem of missing training data, we also evaluated a technique called data augmentation: with little effort, GPT-3 ("Generative Pretrained Transformer 3", an autoregressive language model that uses deep learning to generate human-like text) can generate additional and safe ID and OOD data to train and evaluate NLP models.
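One widely used scoring method for threshold-based OOD detection is the maximum softmax probability (MSP) over a classifier's logits. The following sketch shows the idea; the threshold value and function names are illustrative, and the project evaluated several such scores rather than this one specifically:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ood_mask(logits, threshold=0.7):
    """True where the maximum softmax probability falls below the threshold.

    A confident ID prediction concentrates probability mass on one class;
    a flat distribution suggests the input lies outside the training domain.
    """
    msp = softmax(np.asarray(logits, dtype=float)).max(axis=-1)
    return msp < threshold

# A peaked logit vector is kept as ID; a flat one is flagged as OOD.
print(ood_mask([[10.0, 0.0, 0.0], [0.0, 0.0, 0.0]]))
```

In practice, the threshold is calibrated on held-out ID (and, if available, OOD) examples, which is precisely where the few-shot setting makes things difficult.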
Application of weighted combinatorics in the generation and selection of parameters and their representatives in software testing
Some functional testing methods (so-called black-box tests), such as equivalence class testing or boundary value analysis, focus on individual parameters. For these parameters, they determine representatives (values or classes of values) to be considered in the test. Since such tests usually require not just a single parameter but several parameters, representatives of several parameters must be combined with each other for test execution. Well-understood combinatorial methods such as "all combinations", "pair-wise", or "each choice" are usually used for this purpose. They do not take into account information about weights (attributes such as importance or priority) of the parameters and equivalence class representatives, although such weights would affect the number of associated test cases (e.g., due to importance) or their recommended order (in terms of prioritization). In addition, in the case of the equivalence class method, there are scenarios in which a combination of several invalid classes in a single test case could optionally be explicitly desired, completely undesirable, or limited to a certain number, in order to specifically test fault combinations on the one hand but also to simplify fault localization on the other. There is reason to believe that considering such weights and options allows deriving more targeted and ultimately more efficient test cases.
In 2023, we evaluated and compared known combinatorial approaches that take weights into account when combining parameters or their values. Based on this, we developed a novel approach to generate and select parameters and their representatives in software testing. The proposed method uses a weighting system to prioritize the individual parameters, their equivalence classes, and concrete representatives in a set of test cases. If necessary, their interactions can also be specifically weighted in order to allow certain combinations to occur more frequently in the generated test cases. To evaluate the approach, we defined a suitable prototype data structure that represents the various weightings. We then implemented evaluation functions for existing sets of test cases in order to quantitatively determine how well such a test case set satisfies the specified combinatorics. In a further step, we used these evaluation functions in combination with various systematic methods and heuristics (the SAT solver Z3, simulated annealing, and genetic algorithms) to generate new test cases that match the weighting or to optimize existing sets by adding missing test cases. Simulated annealing was the fastest and gave the best results in the test series. Although the SAT approach worked well for small problems, it was no longer practical for larger test case sets due to exorbitant runtimes.
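An evaluation function of the kind described, which measures how well an existing test suite satisfies a weighted pairwise specification, might look like the following sketch. The data structures and names are illustrative assumptions, not the project's prototype:

```python
def weighted_pair_coverage(test_cases, pair_weights):
    """Fraction of the total desired weight covered by a test suite.

    test_cases:   list of dicts mapping parameter name -> chosen value.
    pair_weights: dict mapping frozenset({(p1, v1), (p2, v2)}) -> weight,
                  i.e., each weighted pair of parameter-value assignments.
    """
    covered = 0.0
    for pair, weight in pair_weights.items():
        # A pair is covered if any test case realizes both assignments.
        if any(all(tc.get(p) == v for p, v in pair) for tc in test_cases):
            covered += weight
    total = sum(pair_weights.values())
    return covered / total if total else 1.0

# Hypothetical example: the heavier (os=linux, browser=firefox) pair is
# covered, the lighter (os=windows, browser=firefox) pair is not.
pairs = {
    frozenset({("os", "linux"), ("browser", "firefox")}): 2.0,
    frozenset({("os", "windows"), ("browser", "firefox")}): 1.0,
}
suite = [{"os": "linux", "browser": "firefox"}]
print(weighted_pair_coverage(suite, pairs))
```

A score like this can serve directly as the objective for simulated annealing or a genetic algorithm: candidate suites are mutated, and moves that raise the covered weight are preferred.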
-
Recurrent Neuronal Networks (RNNs) for Real-Time Estimation of Nonlinear Motion Models
(Third Party Funds Single)
Term: 01.10.2017 - 31.03.2021
Funding source: Fraunhofer-Gesellschaft
URL: https://www2.cs.fau.de/research/RuNN/

With the growing availability of information about an environment (e.g., the geometry of a gymnasium) and about the objects therein (e.g., athletes in the gymnasium), there is an increasing interest in bringing that information together profitably (so-called information fusion) and in processing it. For example, one would like to reconstruct physically correct animations (e.g., in virtual reality, VR) of complex and highly dynamic movements (e.g., in sports situations) in real time. Likewise, manufacturing plants that suffer from unfavorable environmental conditions (e.g., magnetic field interference or missing GPS signal) benefit from, e.g., high-precision localization of goods. Typically, to describe movements, one uses either poses that describe a "snapshot" of a state of motion (e.g., idle state, stoppage) or a motion model that describes movement over time (e.g., walking or running). In addition, human movements may be identified, detected, and sensed by different sensors (e.g., on the body) and mapped in the form of poses and motion models. Different types of modern sensors (e.g., camera, radio, and inertial sensors) provide information of varying quality.

In principle, with the help of expensive and highly precise measuring instruments, poses resp. motion models can be extracted without errors, for example from positions on small tracking areas. Positions, e.g., of human extremities, can describe or be described by poses and motion models. Camera-based sensors deliver the required high-frequency and high-precision reference measurements on small areas. However, as the size of the tracking surface increases, the usability of camera-based systems decreases (due to inaccuracies or occlusion issues). Likewise, on large areas, radio and inertial sensors only provide noisy and inaccurate measurements.
Although a combination of radio and inertial sensors based on Bayesian filters achieves greater accuracy, it is still inadequate to precisely sense human motion on large areas, e.g., in sports, as human movement changes abruptly and rapidly. Thus, the resulting motion models are inaccurate. Furthermore, every human movement is highly nonlinear (and unpredictable). We cannot map this nonlinearity correctly with today's motion models. Bayesian filters describe these models, but these (statistical) methods break a nonlinear problem down into linear subproblems, which in turn cannot physically represent the motion. In addition, current methods produce high latency when they require accuracy.

Due to these three problems (inaccurate position data on large areas, nonlinearity, and latency), today's methods are unusable, e.g., for sports applications that require short response times. This project aims to counteract these nonlinearities by using machine learning methods. The project includes research on recurrent neural networks (RNNs) to create nonlinear motion models. As modern Bayesian filtering methods (e.g., Kalman and particle filters) and other statistical methods can only describe the linear portions of nonlinear human movements (e.g., the relative position of the head w.r.t. the trunk while walking or running), they are physically not completely correct. Therefore, the main goal is to evaluate how machine learning methods can describe complex and nonlinear movements. We examined whether RNNs describe the movements of an object physically correctly and can support or replace previous methods. As part of a large-scale parameter study, we simulated physically correct movements and optimized RNN procedures on these simulations. We successfully showed that, with the help of suitable training methods, RNN models can learn either physical relationships or shapes of movement.
This project addresses three key topics:
I. A basic implementation investigates how and why methods of machine learning can be used to determine models of human movement.
In 2018, we first established a deeper understanding of the initial situation and problem definition. With the help of different basic implementations (different motion models) we investigated (1) how different movements (e.g., humans: walk, run, slalom; vehicles: meander, zig-zag) affect measurement inaccuracies of different sensor families, (2) how measurement inaccuracies of different sensor families (e.g., visible orientation errors, audible noise, and deliberate artificial errors) affect human motion, and (3) how different filter methods for error correction (that balance accuracy and latency) affect both motion and sensing. In addition, we showed (4) how measurement inaccuracies (due to the use of current Bayesian filtering techniques) correlate nonlinearly with human posture (e.g., the gait apparatus) and predictably affect health (simulator sickness), as determined through machine learning.

We studied methods of machine and deep learning for motion detection (humans: head, body, upper and lower extremities; vehicles: single- and bi-axial) and motion reconstruction (5) based on inertial, camera, and radio sensors, as well as various methods for feature extraction (e.g., SVM, DT, k-NN, VAE, 2D-CNN, 3D-CNN, RNN, LSTM, M/GRU). These were interconnected into different hybrid filter models to enrich extracted features with temporal and context-sensitive motion information, potentially creating more accurate, robust, and close to real-time motion models. In this way, these models learned (6) motion models for multi-axis vehicles (e.g., forklifts) based on inertial, radio, and camera data, which generalize to different environments or tracking surfaces (with varying size, shape, and sensory structure, e.g., magnetic field, multipath, texturing, and illumination). Furthermore (7), we gained a deeper understanding of the effects of non-constantly accelerated motion models on radio signals.
On the basis of these findings, we trained an LSTM model that predicts different movement speeds and motion forms of a single-axis robot (i.e., Segway) close to real-time and more accurately than conventional methods.
In 2019, we found that these models can also predict human movement (human movement model). We also determined that the LSTM models can either be fully self-sufficient at runtime or integrated as support points into localization estimates, e.g., into Pedestrian Dead Reckoning (PDR) methods.
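The Pedestrian Dead Reckoning (PDR) methods mentioned here advance a position estimate step by step from detected steps, estimated step lengths, and headings. A minimal 2D sketch of such an update (names and simplifications are illustrative, not the project's implementation) is:

```python
import math

def pdr_update(position, heading_rad, step_length_m):
    """Advance a 2D position by one detected step along the current heading.

    position:      (x, y) in meters.
    heading_rad:   walking direction in radians (0 = east, pi/2 = north).
    step_length_m: estimated length of the detected step in meters.
    """
    x, y = position
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# Walking two 0.7 m steps due east from the origin:
pos = (0.0, 0.0)
for _ in range(2):
    pos = pdr_update(pos, 0.0, 0.7)
print(pos)  # (1.4, 0.0)
```

Because heading and step-length errors accumulate over many such updates, PDR drifts; this is where a learned motion model can act as a support point, correcting the estimate instead of replacing it.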
II. Based on this, we try to find ways to optimize the basic implementation in terms of robustness, latency, and reusability.
In 2018, we used the findings from I. (1-7) to stabilize so-called (1) relative Pedestrian Dead Reckoning (PDR) methods using motion classifiers. These enable a generalization to any environment. A deeper understanding of the radio signal (2) allowed us to learn long-term errors in RNN-based motion models. This improves position accuracy and stability and enables near real-time prediction. First experiments showed the robustness of the movement models (3) with the help of different real movement trajectories (unknown to the models) for one- and two-axis vehicles. Furthermore, we investigated (4) how hybrid filter models (e.g., the interconnection of feature extractors such as 2D/3D-CNNs and time-series trackers such as RNN-LSTMs) provide more accurate, more stable, and filtered (outlier-corrected) results.
In 2019, we showed that models of the RNN family can extrapolate movements into the future so that they compensate for the latency of the processing pipeline and beyond. Furthermore, we examined the explainability, interpretability, and robustness of the models examined here, and their reusability for human movement. With the help of a simulator, we generated physically correct movements, e.g., positions of pedestrians, cyclists, cars, and planes. Based on this data, we showed that RNN models can interpolate between different types of movement, compensate for missing data points, interpret white and random noise as such, and extrapolate movements. The latter enables processing-specific latency to be compensated and human movement to be predicted from radio and inertial data in real time.

Novel RNN architecture. Furthermore, in 2019, we researched a new architecture, or topology, of a neural network that balances the strengths and weaknesses of flat neural networks (NNs) and recurrent networks. We found this optimal NN for determining physically correct movement in a large-scale parameter study. In particular, we also optimized the model architecture and parameters for human-centered localization. These optimal architectures predict human movement far into the future from as little sensor information as possible. The architecture with the least localization error combines two DNNs with an RNN.

Interpretability of models. In 2019, we examined the functionality of this new model. For this purpose, we researched a new process pipeline for the interpretation and explanation of the model. The pipeline uses the mutual information flow and the mutual transfer entropy in combination with various targeted manipulations of the hidden states and suitable visualization techniques to describe the state of the model at any time, both subjectively and objectively.
In addition, we adapted a variational autoencoder (VAE) to better visualize and interpret extracted features of a neural network. We designed and parameterized the VAE such that the reconstruction error of the signal is within the range of the measurement noise, and at the same time we forced the model to store disentangled features in its latent space. This disentanglement enabled first subjective statements about the interrelationships of the features that are really necessary to optimally encode the channel state of a radio signal.

Compression. In 2019, we discovered a side effect of the VAE that offers the possibility of decentralized preprocessing of the channel information directly on the antenna. This compression reduces the data traffic, lowers the communication load, and thus increases the number of possible participants in the communication and localization in a closed sensor network.

Influence of the variation of the input information. In 2019, we also examined how changes in the input sequence length of a recurrent neural network affect the learning success and the type of results of the model. We discovered that a longer sequence persuades the model to become a motion model, i.e., to learn the form of movement, while with shorter sequences the model tends to learn physical relationships. The optimal balance between short and long sequences yields the highest accuracy.

Speed estimation. We investigated speed estimation using the new method. When used in a PDR model, this increased the position accuracy. Initial work in 2019 examined in detail which methods are best suited to estimate the speed of human movement from a raw inertial signal. A new process, a combination of a one-dimensional CNN and a BLSTM, replaced the state of the art.
In 2020, we optimized the architecture of the model with regard to its prediction accuracy and investigated the effects of a deep fusion of Bayesian and DL methods on the prediction accuracy and robustness.
Optimization. In 2020, we improved the existing CNN and RNN architecture and proposed the fusion of ResNet and BLSTM. We replaced the CNN with a residual network to extract deeper and higher-quality features from a continuous data stream. We showed that this architecture entails higher computing costs but surpasses the accuracy of the state of the art. In addition, the RNN architecture can be scaled down to counter the blurring of the context vector of the LSTM cells with very long input sequences, as the remaining ResNet network offers more qualitative features.

Deep Bayesian method. In 2020, we investigated whether methods of the RNN family can extract certain movement properties from recorded movement data to replace the measurement-, process-, and transition-noise distributions of a Kalman filter (KF). We showed that highly optimized LSTM cells can reconstruct trajectories more robustly (lower error variance) and more precisely (better positional accuracy) than an equally highly optimized KF. The deep coupling of LSTM in KF, so-called Deep Bayes, provided the most robust and precise positions and trajectories. This study also showed that of the methods trained on realistic synthetic data, the Deep Bayesian method needed the least real data to adapt to a new unknown domain, e.g., unknown motion shapes and velocity distributions.

III. Finally, the feasibility shall be demonstrated.
In 2018, a large-scale social science study opened the world's largest virtual dinosaur museum and showed that (1) a pre-selected (application-optimized) model of human movement maps resp. predicts human motion robustly and accurately (i.e., without a significant impact on simulator sickness). We used this as a basis for comparison tests with other models that are human-centered and generalize to different environments.
In 2019, we developed two new live demonstrators that are based on the research results achieved in I and II. (1) A model railway that crosses a landscape with a tunnel at variable speeds. The tunnel represents realistic and typical environmental characteristics that lead to nonlinear multipath propagation of a radio transmitter to be located and ultimately to an incorrectly determined position. This demonstrator shows that the RNN methods researched as part of the research project can localize highly precisely and robustly, both on complex channel impulse responses and on dimensionally reduced response times, and deliver better results than conventional Kalman filters. (2) We used the second demonstrator to visualize the movement of a person's upper extremity. We recorded human movement using inexpensive inertial sensors attached to both arm joints, classified it using machine and deep learning, and derived motion parameters. A graphic user interface visualizes the movement and the derived parameters in near real time. The planned generalizability, e.g., of human-centered models and the applicability of RNN-based methods in different environments, has been demonstrated using (1) and (2).

In 2019, we applied the proposed methods in the following applications:

Application: Radio signal. We classified the channel information of a radio system hierarchically. We translated the localization problem of a line-of-sight (LoS) and non-line-of-sight (NLoS) classifier into a binary problem. Hence, we can now precisely localize a position to within a meter, based on individual channel information from a single antenna, if the environment provides heterogeneous channel propagation. Furthermore, we simulated LoS and NLoS channel information and used it to interpolate between different channels. This enables the providers of radio systems to respond a priori in the simulation to changing or new environments in the channel information.
By selectively retraining the models with the simulated knowledge, we obtained more robust models.

Application: Camera and radio signal. We have shown how the RNN methods relate to information from other sensor families, e.g., video images: by combining radio and camera systems when training a model, the two sensor information streams merge smoothly, even in the event of occlusion of the camera. This yields a more robust and precise localization of multiple people.

Application: Camera signal. We used an RNN method to examine the temporal relationships between events in images. In contrast to the previous work, which uses heterogeneous sensor information, this network only uses image information. However, the model uses the image information in such a way that it interprets the images differently: as spatial information, i.e., a single image, and as temporal information, i.e., several images in the input. This splitting implies that individual images can be used as two fictitious virtual sensor information streams to recognize results spatially (features) and to better predict them temporally (temporal relationships). Another work uses camera images to localize the camera itself. For this purpose, we built a new processing pipeline that breaks up the video signal over time, learns absolute and relative information in different neural networks, and merges their outputs into an optimal pose in a fusion network.

Application: EEG signal. In a cooperation project, we applied the researched methods to other sensor data. We recorded beta and gamma waves of the human brain in different emotional states. When used to train an RNN, it correctly predicted the emotions of a test person in 90% of all cases from raw EEG data.

Application: Simulator sickness. We have shown how the visualization in VR affects human perception and movement anomalies, resp. simulator sickness, and how the neural networks researched here can be used to predict these effects.

In 2020, we developed a new live demonstrator based on the research results achieved in II.

Application: Gait reconstruction in VR. In 2020, we used the existing CNN-RNN model to predict human movement, namely the gait cycle and gait phases, using sensor data from a head-mounted inertial sensor to visualize a virtual avatar in VR in real time. We showed that the DL model has significantly lower latencies than the state of the art, since it can recognize gait phases earlier and predict future ones more precisely. However, this comes at the expense of the required computing effort and thus the required hardware.

The project was successfully completed in 2021. In 2021, as part of a successfully completed dissertation, essential findings from the course of the project were linked, conclusions were drawn, and numerous research questions were addressed and answered.
As part of the research project, more than 15 qualification theses, 6 patent families, and more than 20 scientific publications were successfully completed and published. The core contribution of the project is the knowledge of the applicability and pitfalls of recurrent neural networks (RNN), their different cell types and architectures, in different application areas. Conclusion: The ability of the RNN family to deal with dynamics in data streams, e.g., failures, delays, and different sequence lengths in time series data, makes them indispensable in a large number of application areas today.
The project is continued within seminars at FAU and in extracurricular research activities at Fraunhofer IIS as part of the ADA Lovelace Center.
In 2022, we investigated time series augmentation. For this purpose, we evaluated various generative methods, namely variational autoencoders (VAEs) and generative adversarial networks (GANs), for their ability to generate time series of different application domains, e.g., radio-signal features such as signal strength and channel impulse response, characteristics of GNSS spectra, and multidimensional signals from inertial sensors. We proposed a novel architecture called ARCGAN, which combines the known advantages of state-of-the-art methods and can therefore generate significantly more similar (effective) time series than the state of the art.
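For context, generative approaches such as ARCGAN are typically compared against classic non-generative augmentation baselines like jittering and magnitude scaling, which can be sketched in a few lines. This is a generic illustration under our own assumptions, not part of the ARCGAN architecture:

```python
import numpy as np

def augment(series, sigma=0.02, scale_range=(0.9, 1.1), seed=None):
    """Classic time series augmentation: additive Gaussian jitter
    plus a random magnitude scaling of the whole sequence."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    noisy = series + rng.normal(0.0, sigma, size=series.shape)
    return noisy * rng.uniform(*scale_range)

# Produce a slightly perturbed copy of a signal-strength trace.
trace = np.sin(np.linspace(0.0, 6.0, 50))
variant = augment(trace, sigma=0.01, seed=0)
```

Such baselines preserve the coarse shape of the input but cannot synthesize genuinely new motion or channel behavior, which is what motivates the generative methods described above.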
In 2023, we investigated generative methods based on attention mechanisms, transformer architectures, and GPT with respect to their predictive performance for time series. To this end, we evaluated methods such as Legendre Memory Units (LMU), novel transformer architectures, and TimeGPT to better forecast localization information. We could show that, using appropriate input prompts and calibration, preconfigured GPT models can be adapted to new areas of application, which makes training significantly more efficient and also saves energy.

In 2024, we further examine GPT-like models with respect to their uncertainty, explainability, and adaptability. In addition, we analyze the feasibility of these generative methods in relation to various fields of application, e.g., forecasting and anomaly detection, anomaly characterization, anomaly localization, and anomaly mitigation.
Machine Learning: Advances
Basic data
| Title | Machine Learning: Advances |
| --- | --- |
| Short text | SemML-II |
| Module frequency | winter semester only |
| Semester hours per week | 2 |

Registration with a topic request by e-mail before the seminar starts; topics are assigned on a first-come, first-served basis.

Parallel groups / dates

Parallel group 1

| Semester hours per week | 2 |
| --- | --- |
| Teaching language | German or English |
| Responsible | Prof. Dr. Michael Philippsen, Tobias Feigl |

| Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
| --- | --- | --- | --- | --- | --- |
| by arrangement | - | | | | |
| Single session, Thu 14:00 - 15:00 | 10.10.2024 - 10.10.2024 | | | | |
| Block course (incl. Sat), Sat 09:00 - 16:00 | 04.01.2025 - 29.03.2025 | 06.01.2025 | | | |
Machine Learning: Introduction
Basic data
| Title | Machine Learning: Introduction |
| --- | --- |
| Short text | SemML-I |
| Module frequency | winter semester only |
| Semester hours per week | 2 |

Registration with a topic request by e-mail before the seminar starts; topics are assigned on a first-come, first-served basis.

Parallel groups / dates

Parallel group 1

| Semester hours per week | 2 |
| --- | --- |
| Teaching language | German or English |
| Responsible | Prof. Dr. Michael Philippsen, Tobias Feigl |

| Date and Time | Start date - End date | Cancellation date | Lecturer(s) | Comment | Room |
| --- | --- | --- | --- | --- | --- |
| by arrangement | - | | | | |
| Single session, Thu 14:00 - 15:00 | 10.10.2024 - 10.10.2024 | | | | 11302.04.150 |
| Block course (incl. Sat), Sat 09:00 - 16:00 | 04.01.2025 - 29.03.2025 | 06.01.2025 | | | |
2025
PDRNN: Modular Data-driven Pedestrian Dead Reckoning on Loosely Coupled Radio- and Inertial-Signalstreams
Symposium on Position Location and Navigation (PLANS) (Salt Lake City, Utah, 28.04.2025 - 01.05.2025)
In: Proc. Intl. Conf. IEEE Symposium on Position Location and Navigation (PLANS) 2025
Step Detection Enhanced by Anomaly Filtering
3rd International IEEE Applied Sensing Conference (APSCON) (Hyderabad, India, 20.01.2025 - 22.01.2025)
In: Proc. Intl. IEEE Applied Sensing Conference (APSCON) 2025
Exploitation of Hidden Context in Dynamic Movement Forecasting: A Neural Network Journey from Recurrent to Graph Neural Networks and General Purpose Transformers
IEEE/ION Position, Location and Navigation Symposium (Salt Lake City, Utah, 28.04.2025 - 01.05.2025)
In: Proc. Intl. Conf. IEEE Symposium on Position Location and Navigation (PLANS) 2025
2024
Non-Line-of-Sight Detection for Radio Localization using Deep State Space Models
14th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Hong Kong, 14.10.2024 - 17.10.2024)
In: The fourteenth International Conference on Indoor Positioning and Indoor Navigation 2024 (IPIN 2024) 2024
Reinforcement Learning Framework for Robust Navigation in GNSS Receivers
International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2024) (Baltimore, Maryland, 16.09.2024 - 20.09.2024)
In: Proceedings of the 37th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2024) 2024
DOI: 10.33012/2024.19853
Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification
(2024)
DOI: 10.48550/arXiv.2410.15681
Achieving Generalization in Orchestrating GNSS Interference Monitoring Stations Through Pseudo-Labeling
(2024)
DOI: 10.48550/arXiv.2410.14686
Research Avenues for GNSS Interference Classification Robustness: Domain Adaptation, Continual Learning & Federated Learning
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) (VILNIUS, LITHUANIA, 09.09.2024 - 13.09.2024)
In: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2024
Evaluation of (Un-)Supervised Machine Learning Methods for GNSS Interference Classification with Real-World Data Discrepancies
Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) (Baltimore, MD, 16.09.2024 - 20.09.2024)
In: Proc. Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) 2024 2024
GNSS Interference Monitoring: Resilience of Machine Learning Methods on Public Real-World Datasets
In: Navigation, Journal of the Institute of Navigation 71 (2024), p. 1-20
ISSN: 0028-1522
Few-Shot Learning with Uncertainty-based Quadruplet Selection for Interference Classification in GNSS Data
International Conference on Localization and GNSS (Antwerp, 05.06.2024 - 07.06.2024)
In: International Conference on Localization and GNSS 2024
DOI: 10.1109/ICL-GNSS60721.2024.10578525
Few-Shot Learning with Uncertainty-based Quadruplet Selection for Interference Classification in GNSS Data
(2024)
DOI: 10.48550/arXiv.2402.09466
Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization
2024 14th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Hong Kong, China, 14.10.2024 - 17.10.2024)
In: The fourteenth edition of the International Conference on Indoor Positioning and Indoor Navigation (IPIN) 2024
Estimating Multipath Component Delays with Transformer Models
In: IEEE Journal of Indoor and Seamless Positioning and Navigation (2024), p. 1-10
ISSN: 2832-7322
DOI: 10.1109/JISPIN.2024.3422908
Uncertainty-Based Fingerprinting Model Monitoring for Radio Localization
In: IEEE Journal of Indoor and Seamless Positioning and Navigation (2024)
ISSN: 2832-7322
Open Access: https://ieeexplore.ieee.org/abstract/document/10526425
Velocity-Based Channel Charting With Spatial Distribution Map Matching
In: IEEE Journal of Indoor and Seamless Positioning and Navigation 2 (2024), p. 230-239
ISSN: 2832-7322
DOI: 10.1109/JISPIN.2024.3424768
Optimal machine learning and signal processing synergies for low-resource GNSS interference detection and classification
In: IEEE Transactions on Aerospace and Electronic Systems (2024), p. 3-17
ISSN: 0018-9251
DOI: 10.1109/TAES.2023.3349360
2023
Evaluation of (Un-)Supervised Machine-Learning-Based Detection, Classification, and Localization Methods of GNSS Interference in the Real World
In: Proc. Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) 2023
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
(2023), p. 1-29
DOI: 10.48550/arXiv.2208.00919
(Working Paper)
Multipath Delay Estimation in Complex Environments using Transformer
13th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Nuremberg, Germany, 25.09.2023 - 28.09.2023)
In: Proc. 13th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2023) 2023
DOI: 10.1109/IPIN57070.2023.10332470
Uncertainty-based Fingerprinting Model Selection for Radio Localization
13th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Nuremberg, Germany, 25.09.2023 - 28.09.2023)
In: Proc. 13th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2023) 2023
DOI: 10.1109/IPIN57070.2023.10332531
Indoor Localization with Robust Global Channel Charting: A Time-Distance-Based Approach
In: IEEE Transactions on Machine Learning in Communications and Networking 1 (2023), p. 1-15
ISSN: 2831-316X
DOI: 10.1109/TMLCN.2023.3256964
Low-cost COTS GNSS interference monitoring, detection, and classification system
In: Sensors 23 (2023), p. 1-42
ISSN: 1424-8220
DOI: 10.3390/s23073452
2022
Complementary Semi-Deterministic Clusters for Realistic Statistical Channel Models for Positioning
(2022), p. 1-6
DOI: 10.48550/ARXIV.2207.07837
(Techreport)
Towards Realistic Statistical Channel Models For Positioning: Evaluating the Impact of Early Clusters
(2022), p. 1-5
DOI: 10.48550/ARXIV.2207.07838
(Techreport)
Multimodal Learning for Reliable Interference Classification in GNSS Signals
Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) (Denver, CO, 19.09.2022 - 23.09.2022)
In: Proc. Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) 2022
DOI: 10.33012/2022.18586
Initial Results of a Low-Cost GNSS Interference Monitoring Network
Intl. Conf. on Positioning and Navigation for Intelligent Transport Systems (POSNAV) (Berlin, 03.11.2022 - 04.11.2022)
In: Intl. Conf. on Positioning and Navigation for Intelligent Transport Systems (POSNAV) 2022
Unsupervised Disentanglement for Post-Identification of GNSS Interference in the Wild
Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) (Denver, CO, 19.09.2022 - 23.09.2022)
In: Proc. Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) 2022
DOI: 10.33012/2022.18493
Position Tracking using Likelihood Modeling of Channel Features with Gaussian Processes
(2022)
DOI: 10.48550/arXiv.2203.13110
Delay Estimation in Dense Multipath Environments using Time Series Segmentation
IEEE Wireless Communications and Networking Conference (WCNC) (Austin, Texas, 10.04.2022 - 13.04.2022)
In: Proc. Intl. Conf. IEEE Wireless Communications and Networking Conference (WCNC) 2022
DOI: 10.1109/WCNC51071.2022.9771875
Machine Learning-assisted GNSS Interference Monitoring through Crowdsourcing
Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) (Denver, CO, 19.09.2022 - 23.09.2022)
In: Proc. Intl. Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+) 2022
DOI: 10.33012/2022.18492
Transfer Learning to adapt 5G AI-based Fingerprint Localization across Environments
IEEE 95th Vehicular Technology Conference (VTC-Spring) (Helsinki, 19.06.2022 - 22.06.2022)
In: Proc. Intl. Conf. IEEE Vehicular Technology Conference (VTC-Spring) 2022
DOI: 10.1109/VTC2022-Spring54318.2022.9860906
Low-cost COTS GNSS interference detection and classification platform: Initial results
Intl. Conf. on Localization and GNSS (ICL-GNSS) (Tampere, 07.06.2022 - 09.06.2022)
In: Proc. Intl. Conf. on Localization and GNSS (ICL-GNSS) 2022
DOI: 10.1109/ICL-GNSS54081.2022.9797025
2021
Accuracy-Aware Compression of Channel Impulse Responses using Deep Learning
International Conference on Indoor Positioning and Indoor Navigation (IPIN2021) (Lloret de Mar, 29.11.2021 - 02.12.2021)
In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN 2021) 2021
DOI: 10.1109/IPIN51156.2021.9662545
Datengetriebene Methoden zur Bestimmung von Position und Orientierung in funk‐ und trägheitsbasierter Koppelnavigation (Dissertation, 2021)
URL: https://nbn-resolving.org/urn:nbn:de:bvb:29-opus4-173550
Robust ToA-Estimation using Convolutional Neural Networks on Randomized Channel Models
International Conference on Indoor Positioning and Indoor Navigation (IPIN 2021) (Lloret de Mar, Spain, 29.11.2021 - 02.12.2021)
In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN 2021) 2021
DOI: 10.1109/IPIN51156.2021.9662625
Contact Tracing with the Exposure Notification Framework in the German Corona-Warn-App
2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Lloret de Mar, 29.11.2021 - 02.12.2021)
In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN 2021) 2021
DOI: 10.1109/IPIN51156.2021.9662591
Estimating TOA Reliability with Variational Autoencoders
In: IEEE Sensors Journal (2021), p. 1-6
ISSN: 1530-437X
DOI: 10.1109/JSEN.2021.3101933
2020
Real-Time Gait Reconstruction for Virtual Reality Using a Single Sensor
2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020 (Ipojuca, Pernambuco, 09.11.2020 - 13.11.2020)
In: Adjunct Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020 2020
DOI: 10.1109/ISMAR-Adjunct51615.2020.00037
RNN-aided Human Velocity Estimation from a Single IMU
In: Sensors 20 (2020), p. 1-31
ISSN: 1424-8220
DOI: 10.3390/s20133656
URL: https://www.mdpi.com/1424-8220/20/13/3656
Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments
15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (GRAPP 2020) (Valletta, 27.02.2020 - 29.02.2020)
In: Kadi Bouatouch, A. Augusto Sousa, Jose Braz (ed.): Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, Portugal: 2020
DOI: 10.5220/0008989903070318
URL: http://www.grapp.visigrapp.org/
ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (Seattle, WA, 16.06.2020 - 18.06.2020)
In: Computer Vision Foundation (CVF) (ed.): Joint Workshop on Long-Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM 2020
DOI: 10.1109/cvprw50498.2020.00029
URL: http://openaccess.thecvf.com/content_CVPRW_2020/html/w3/Ott_ViPR_Visual-Odometry-Aided_Pose_Regression_for_6DoF_Camera_Localization_CVPRW_2020_paper.html
A Sense of Quality for Augmented Reality Assisted Process Guidance
International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (Ipojuca, Pernambuco, 09.11.2020 - 13.11.2020)
In: IEEE Proc. International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) 2020
DOI: 10.1109/ISMAR-Adjunct51615.2020.00046
2019
A Bidirectional LSTM for Estimating Dynamic Human Velocities from a Single IMU
10th International Conference on Indoor Positioning and Indoor Navigation (IPIN) (Pisa, 30.09.2019 - 03.10.2019)
In: IEEE (ed.): Proceedings of the 10th International Conference on Indoor Positioning and Indoor Navigation (IPIN) 2019
DOI: 10.1109/IPIN.2019.8911814
URL: https://www2.cs.fau.de/publication/download/IPIN2019.pdf
Sick Moves! Motion Parameters as Indicators of Simulator Sickness
In: IEEE Transactions on Visualization and Computer Graphics 25 (2019), p. 3146-3157
ISSN: 1077-2626
DOI: 10.1109/TVCG.2019.2932224
URL: https://ieeexplore.ieee.org/document/8798880
UWB Channel Impulse Responses for Positioning in Complex Environments: A Detailed Feature Analysis
In: Sensors 19 (2019)
ISSN: 1424-8220
DOI: 10.3390/s19245547
URL: https://www.mdpi.com/1424-8220/19/24/5547
A Framework for Location-Based VR Applications
GI VR/AR Workshop 2019 (Fulda, 17.09.2019 - 18.09.2019)
In: Gesellschaft für Informatik e.V. (ed.): Virtuelle und Erweiterte Realitat: 16. Workshop der GI-Fachgruppe VR/AR (Berichte aus der Informatik) 2019
URL: https://downloads.hci.informatik.uni-wuerzburg.de/2019-gi-vr-ar-framework-for-location-based-vr-applications.pdf
ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization
(2019)
DOI: 10.48550/arXiv.1912.08263
A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data
Symposium on Spatial User Interaction (SUI'19) (New Orleans, 19.10.2019 - 20.10.2019)
In: Christoph W. Borst, Arun K. Kulshreshth, Gerd Bruder, Stefania Serafin, Christian Sandor, Kyle Johnsen, Jinwei Ye, Daniel Roth, Sungchul Jung (ed.): Proceedings of the Symposium on Spatial User Interaction (SUI'19), New York, NY, USA: 2019
DOI: 10.1145/3357251.3360018
URL: https://dl.acm.org/citation.cfm?id=3357251.3360018
Brain 2 Communicate: EEG-based Affect Recognition to Augment Virtual Social Interactions
Mensch und Computer 2019 (Hamburg, 08.09.2019 - 11.09.2019)
In: Gesellschaft für Informatik e.V. (ed.): Mensch und Computer 2019 - Workshopband 2019
DOI: 10.18420/muc2019-ws-571
URL: https://dl.gi.de/bitstream/handle/20.500.12116/25205/571.pdf
2018
Head-to-Body-Pose Classification in No-Pose VR Tracking Systems
25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) (Reutlingen, 18.03.2018 - 22.03.2018)
In: Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) 2018
DOI: 10.1109/VR.2018.8446495
URL: http://www2.informatik.uni-erlangen.de/publication/download/IEEE-VR2018b.pdf
Human Compensation Strategies for Orientation Drifts
25th IEEE Conference on Virtual Reality and 3D User Interfaces (Reutlingen, 18.03.2018 - 22.03.2018)
In: Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) 2018
DOI: 10.1109/VR.2018.8446300
URL: https://www2.cs.fau.de/publication/download/IEEE-VR2018a.pdf
Supervised Learning for Yaw Orientation Estimation
(2018)
ISSN: 2471-917X
DOI: 10.1109/IPIN.2018.8533811
URL: https://www2.cs.fau.de/publication/download/IPIN2018a.pdf
Recurrent Neural Networks on Drifting Time-of-Flight Measurements
9th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2018) (Nantes, 24.09.2018 - 27.09.2018)
In: Proceedings of the 9th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2018) 2018
DOI: 10.1109/IPIN.2018.8533813
URL: https://www2.cs.fau.de/publication/download/IPIN2018b.pdf
A Location-Based VR Museum
10th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2018) (Würzburg, 05.09.2018 - 07.09.2018)
In: Proceedings of the 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games 2018) 2018
DOI: 10.1109/VS-Games.2018.8493404
URL: http://doi.ieeecomputersociety.org/10.1109/VS-Games.2018.8493404
Beyond Replication: Augmenting Social Behaviors in Multi-User Social Virtual Realities
25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) (Reutlingen, 18.03.2018 - 22.03.2018)
In: Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018) 2018
DOI: 10.1109/VR.2018.8447550
URL: https://www.hci.uni-wuerzburg.de/download/2018-ieeevr-behav-augm-preprint.pdf
2017
Acoustical manipulation for redirected walking
23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17) (Gothenburg, 08.11.2017 - 10.11.2017)
In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17), New York: 2017
DOI: 10.1145/3139131.3141205
URL: https://www2.cs.fau.de/publication/download/VRST2017.pdf
Social Augmentations in Multi-User Virtual Reality: A Virtual Museum Experience
2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (Nantes, 09.10.2017 - 13.10.2017)
In: Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) 2017
DOI: 10.1109/ISMAR-Adjunct.2017.28
URL: http://ieeexplore.ieee.org/document/8088445/
- Methods and Apparatuses for Positioning in a Wireless Communications Network (Property Rights: EP3819657A1, WO2021089258A1)
- Apparatuses and Methods for Correcting Orientation Information From One or More Inertial Sensors (Property Rights: DE102017100622A1, CA3044140A1, CN110073365A, EP3568801A1, JP2020505614A, JP6761551B2, KR102207195B1, KR20190085974A, US2019346280A1, WO2018130446A1)
- Apparatus and Method for Efficient State Determination and Localisation Between Mobile Platforms (Property Right: WO2019197006A1)
- Method to Determine a Present Position of an Object, Positioning System, Tracker and Computer Program (Property Rights: CA3084206A1, CN111512269A, EP3724744A1, JP2021505898A, KR20200086332A, US2020371226A1, WO2019114925A1)
- Method for Setting a Viewing Direction in a Representation of a Virtual Environment (Property Rights: DE102016109153A1, CA3022914A1, CN109313488A, EP3458935A1, JP2019519842A, JP6676785B2, KR102184619B1, KR20190005222A, US2019180471A1, US10885663B2, WO2017198441A1)
- Method for Predicting a Motion of an Object, Method for Calibrating a Motion Model, Method for Deriving a Predefined Quantity and Method for Generating a Virtual Reality View (Property Rights: CA3086559A1, CN111527465A, EP3732549A1, JP2021508886A, KR20200100160A, US2020334837A1, WO2019129355A1)