Awful
For instance, if your assignment is to create a model and generate code for it, the solution he provides is just a table of the outputs, so you do not really learn anything from it. I talked to him in his office about my question (he only demonstrates how to use Simulink once), and he said it was my responsibility.
Poor
Everything about Prof. Rapos is too much! It's such a struggle to understand him because he talks really fast. He needs to chill.
Miami University Oxford - Computer Science
Assistant Professor at Miami University
Higher Education
Eric
Rapos
Oxford, Ohio
Experienced academic with a demonstrated history of working in the higher education industry. Skilled in software engineering and development. Strong professional with a Doctor of Philosophy (PhD) focused in Computer Science from Queen's University.
PhD Student - Research Assistant
Research in MDE - Impact Analysis for Simulink Models
MSc Student - Research Assistant
Working under the supervision of Dr Juergen Dingel, in the area of Model Driven Development. My work focuses on Incremental Test Case Generation.
CIC Officer
Instructor with the Air Cadet Squadron, teaching and developing in youth the qualities and aims of the program.
General Training Flight Commander
Responsible for instructing first year cadets on the introductory classes for summer training, while familiarizing them with summer training life.
Introduction to Leadership Course Flight Instructor
Responsible for instructing two different intakes of junior leadership cadets in the field of leadership and team building.
Assistant Professor
Eric worked at Miami University as an Assistant Professor
MSc
Computer Science
Completed courses in software language history, tools for software modeling, data mining, and research methods. Following completion of course work, my focus was on research in the incremental generation of test cases for UML-RT Models. The degree culminated in the presentation and defense of a thesis entitled "Understanding the Effects of Model Evolution through Incremental Test Case Generation for UML-RT Models".
Doctor of Philosophy (PhD)
Computer Science
Completed courses in professional development, ultra-large scale software systems, game design, and unconventional computing. Preliminary research is focused on model-based software testing, specifically on the co-evolution of tests alongside software models.
BCmpH - SSP
Software Design
Agnes Benidickson Tricolour Award
The Agnes Benidickson Tricolour Award and induction into the Tricolour Society is the highest tribute that can be paid to a student for valuable and distinguished service to the University in non-athletic, extra-curricular activities. Such service may have been confined to a single field, or it may have taken the form of a significant contribution over a wide range of activities. The award is named after Dr. Agnes Benidickson, who was Chancellor of Queen's University from 1980 until 1996. Admission to the Tricolour Society shall be limited to students of the University. Although the number of students to be admitted to the Society each year shall be decided by the selection committee, the number shall be limited so as not to jeopardize the distinction of the Tricolour Society. Admission shall not be granted simply because a person holds or has held a certain position or office on campus. The Rector serves as the Chair of the Tricolour Award Selection Committee. The selection process follows the procedures as outlined in the Award's Terms of Reference. The names of the Award's recipients are engraved on a plaque in the Students' Memorial Union portion of the John Deutsch University Centre.
Proc. ICST 2015, 8th International Conference on Software Testing, Verification and Validation
The relative ease of test case generation associated with model-based testing can lead to an increased number of test cases being identified for any given system; this is problematic, as it is becoming nearly impossible to run (or even generate) all of the possible tests in the available time frames. Test case prioritization is a method of ranking tests in order of importance, or priority, based on criteria specific to a domain or implementation, and selecting some subset of tests to generate and run. Some approaches require the generation of all tests and simply prioritize the ones to be run; however, we propose an approach that prevents unnecessary generation of tests through the use of symbolic execution trees to determine which tests provide the most benefit to coverage of execution. Our approach makes use of fuzzy logic, specifically fuzzy control systems, to prioritize test cases that are generated from these execution trees; the prioritization is based on natural language rules about testing priority. Within this paper we present our motivation, some background research, our methodology and implementation, results, and conclusions.
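The abstract describes ranking candidate tests with a fuzzy control system before generating them. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: the inputs (coverage gain, path depth), membership ranges, and rule weights are hypothetical stand-ins.

```python
# Illustrative sketch only: a tiny fuzzy-style prioritizer over candidate paths of a
# symbolic execution tree. Inputs and rules are hypothetical, not the paper's system.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def priority(coverage_gain, depth):
    """Fuzzy inference: fire plain-language rules, then defuzzify by weighted average."""
    gain_low, gain_high = tri(coverage_gain, -0.5, 0.0, 0.5), tri(coverage_gain, 0.5, 1.0, 1.5)
    depth_shallow, depth_deep = tri(depth, -5, 0, 10), tri(depth, 5, 15, 25)
    rules = [
        (min(gain_high, depth_shallow), 1.0),  # "high coverage gain and shallow path -> high priority"
        (min(gain_high, depth_deep), 0.7),     # "high coverage gain and deep path    -> medium-high"
        (min(gain_low, depth_shallow), 0.4),   # "low coverage gain and shallow path  -> medium-low"
        (min(gain_low, depth_deep), 0.1),      # "low coverage gain and deep path     -> low priority"
    ]
    total = sum(weight for weight, _ in rules)
    return sum(weight * out for weight, out in rules) / total if total else 0.0

# Rank hypothetical candidate paths before generating their concrete test cases.
candidates = {"path_A": (0.9, 3), "path_B": (0.2, 18), "path_C": (0.6, 8)}
ranked = sorted(candidates, key=lambda p: priority(*candidates[p]), reverse=True)
print(ranked)  # ['path_A', 'path_C', 'path_B']
```

Only the top-ranked paths would then be expanded into concrete test cases, avoiding generation of the rest.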
IEEE Fifth International Conference on Software Testing, Verification and Validation (ICST 2012)
Model driven development (MDD) is on the rise in software engineering, and no more so than in the realm of real-time and embedded systems. Being able to leverage the code generation and validation techniques made available through MDD is worth exploring, and is a large area of focus in academic and industrial research. However, given the iterative nature of MDD, the evolution of models causes test case generation to occur multiple times throughout a software modeling project. Currently, the existing process of regenerating test cases for a modified model of a system can be costly, inefficient, and even redundant. Thus, it is our goal to achieve an improved understanding of the impact of typical state machine evolution steps on test cases, and how this impact can be mitigated by reusing previously generated test cases. We are also aiming to implement this in a software prototype to automate and evaluate our work.
30th International Conference on Software Maintenance and Evolution (ICSME 2014)
This paper presents a semi-automated framework for identifying and representing different kinds of variability in Simulink models. Based on the observed variants found in similar subsystem patterns inferred using Simone, a text-based model clone detection tool, we propose a set of variability operators for Simulink models. By applying these operators to six example systems, we are able to represent the variability in their similar subsystem patterns as a single subsystem template directly in the Simulink environment. The product of our framework is a single consolidated subsystem model capable of expressing the observed variability across all instances of each inferred pattern. The process of pattern inference and variability analysis is largely automated and can be easily applied to other collections of Simulink models. The framework is aimed at providing assistance to engineers to identify, understand, and visualize patterns of subsystems in a large model set. This understanding may help in reducing maintenance effort and bug identification at an early stage of the software development.
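As a rough illustration of consolidating similar subsystems into a single template with explicit variation points, here is a hedged sketch; the block names, parameter values, and the simple "agree everywhere vs. vary" rule are invented for the example and do not reproduce the paper's variability operators.

```python
# Illustrative sketch only: merging near-identical subsystem clone instances into one
# template. Clone data is a hypothetical stand-in for what a clone detector such as
# Simone might report; real variability operators are richer than this.

def consolidate(clones):
    """Merge clone instances (dicts of block name -> parameter value) into a template.
    Blocks that agree across all instances stay fixed; the rest become variation points."""
    all_blocks = sorted({block for clone in clones for block in clone})
    template = {}
    for block in all_blocks:
        values = {clone.get(block) for clone in clones}
        if len(values) == 1:
            template[block] = values.pop()
        else:
            template[block] = ("<VARIATION_POINT>", sorted(values, key=str))
    return template

clone_a = {"Gain1": 2.0, "Saturation": (0, 100), "Sum": "+-"}
clone_b = {"Gain1": 2.0, "Saturation": (0, 150), "Sum": "+-"}
clone_c = {"Gain1": 2.0, "Saturation": (0, 100), "Sum": "+-", "Delay": 1}

for block, value in consolidate([clone_a, clone_b, clone_c]).items():
    print(block, value)
# Gain1 and Sum are identical everywhere; Saturation and Delay surface as variation points.
```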
Proc. ICST 2015, 8th International Conference on Software Testing, Verification and Validation
The concept of co-evolution refers to two (or more) objects evolving alongside each other, such that there is a relationship between the two that must be maintained. In the field of co-evolution of model-based tests, this refers to the tests and test models evolving alongside the source models, such that the tests and test models remain correct for testing the source models. Previous work centered largely on the iterative development aspect of Model-Based Testing (MBT); however, further attention is needed on the prolonged maintenance of model-based tests after initial release.
30th International Conference on Software Maintenance and Evolution (ICSME 2014)
Model-based software is evolving at an increasing rate, and this has an impact on model-based test suites, often causing unnecessary regeneration of tests. Our work proposes that by examining evolution patterns of Simulink automotive models and their associated test models we can identify the direct impacts of evolution on the tests. Using these evolution patterns, we propose the design of a process to ensure that as a Simulink model evolves its associated test models are automatically adapted, requiring minimal computation. This will lead to the development of a prototype tool capable of performing this model-based test co-evolution of tests alongside source models and presenting results to test engineers.
Queen's University (Masters Thesis)
Model driven development (MDD) is on the rise in software engineering and no more so than in the realm of real-time and embedded systems. Being able to leverage the code generation and validation techniques made available through MDD is worth exploring, and is the focus of much academic and industrial research. However given the iterative nature of MDD, the natural evolution of models causes test case generation to occur multiple times throughout a software modeling project. Currently, the existing process of regenerating test cases for a modified model of a system can be costly, inefficient, and even redundant. The focus of this research was to achieve an improved understanding of the impact of typical model evolution steps on both the execution of the model and its test cases, and how this impact can be mitigated by reusing previously generated test cases. In this thesis we use existing techniques for symbolic execution and test case generation to perform an analysis on example models and determine how evolution affects model artifacts; these findings were then used to classify evolution steps based on their impact. From these classifications, we were able to determine exactly how to perform updates to existing symbolic execution trees and test suites in order to obtain the resulting test suites using minimal computational resources whenever possible. The approach was implemented in a software plugin, IncreTesCaGen, that is capable of incrementally generating test cases for a subset of UML-RT models by leveraging the existing testing artifacts (symbolic execution trees and test suites), as well as presenting additional analysis results to the user. Finally, we present the results of an initial evaluation of our tool, which provides insight into the tool’s performance, the effects of model evolution on execution and test case generation, as well as design tips to produce optimal models for evolution.
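The thesis describes classifying evolution steps by their impact and then updating existing symbolic execution trees and test suites only where needed. The sketch below illustrates that reuse-versus-regenerate decision in spirit; the change kinds, rules, and test names are simplified placeholders, not the thesis's actual taxonomy or the IncreTesCaGen implementation.

```python
# Illustrative sketch only: classify hypothetical state-machine evolution steps by their
# impact on existing tests, then plan the cheapest action that keeps the suite correct.

REUSE, UPDATE, REGENERATE = "reuse", "update", "regenerate"

def classify(change):
    """Map a model change to an action for the affected tests (simplified rules)."""
    kind = change["kind"]
    if kind in {"rename_state", "change_comment"}:
        return REUSE            # behaviour-preserving: existing tests remain valid
    if kind in {"change_guard", "change_action"}:
        return UPDATE           # local change: patch expected values on affected paths
    return REGENERATE           # structural change (add/remove state or transition)

def plan(changes, tests_by_element):
    """Group the tests touching each changed element under the chosen action."""
    actions = {REUSE: [], UPDATE: [], REGENERATE: []}
    for change in changes:
        for test in tests_by_element.get(change["element"], []):
            actions[classify(change)].append(test)
    return actions

suite = {"T1_guard": ["test_open", "test_close"], "S2": ["test_error_path"]}
changes = [{"kind": "change_guard", "element": "T1_guard"},
           {"kind": "add_transition", "element": "S2"}]
print(plan(changes, suite))
# {'reuse': [], 'update': ['test_open', 'test_close'], 'regenerate': ['test_error_path']}
```

Everything in the "reuse" and "update" buckets avoids full regeneration, which is where the computational savings come from.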
Proc. MiSE 2016, 8th International Workshop on Modelling in Software Engineering
This paper presents an industrial case study that explores the co-evolution relationship between Matlab Simulink Models and their associated test suites. Through an analysis of differences between releases of both the models and their tests, we are able to determine what the relation between the model evolution and test evolution is, or if one exists at all. Using this comparison methodology, we present empirical results from a production system of 64 Matlab Simulink Models evolving over 9 releases. In our work we show that in this system there is a strong co-evolution relationship (a correlation value of r = 0.9, p < 0.01) between the models and tests, and we examine the cases where the relationship does not exist. We also pose, and answer, three specific research questions about the practices of development and testing over time for the system under study.
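The reported co-evolution relationship is a correlation between how much the models and their tests change from release to release. The following is a small worked sketch of that measurement only; the per-release change counts are made up for illustration (the paper's actual result over 9 releases is r = 0.9, p < 0.01).

```python
# Illustrative sketch only: Pearson correlation between per-release model-change counts
# and test-change counts, as a stand-in for the paper's co-evolution measurement.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

model_changes = [12, 30, 7, 45, 22, 9, 16, 38, 5]   # blocks changed per release (hypothetical)
test_changes  = [10, 28, 9, 40, 25, 7, 14, 35, 6]   # test-model changes per release (hypothetical)
print(round(pearson(model_changes, test_changes), 2))  # close to 1.0 for this fabricated data
```

A high coefficient indicates that releases with heavy model change also tend to have heavy test change, i.e. the artifacts co-evolve.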
Proc. ICSME 2017, IEEE 33rd International Conference on Software Maintenance and Evolution
With the increasing use of Simulink modeling in embedded system development, there comes a need for effective techniques and tools to support managing these models and their related artifacts. Because maintenance of models makes up such a large portion of the cost and effort of the system as a whole, it is increasingly important to ensure that the process of managing models is as simple, intuitive and efficient as possible. Part of model management comes in the form of impact analysis - the ability to determine the impact of a change to a model on related artifacts such as test cases and other models. This paper presents an approach to impact analysis for Simulink models, and a tool to implement it (SimPact). We validate our tool as an impact predictor against the maintenance history of a large set of industrial models and their tests. The results show a high level of both precision and recall in predicting actual impact of model changes on tests.
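Impact analysis of this kind can be pictured as a reachability walk over the model's signal-flow graph, mapping reached blocks to the tests that exercise them. The sketch below shows that general idea only; the graph, block names, and test mapping are hypothetical and this is not the SimPact algorithm.

```python
# Illustrative sketch only: graph-based change impact analysis in the spirit of the
# abstract. Follow downstream signal-flow edges from a changed block and report the
# tests that exercise any reached block. All data here is invented for the example.

from collections import deque

def impacted(changed_block, downstream, tests_for_block):
    """Breadth-first walk of the block dependency graph from the changed block."""
    seen, queue = {changed_block}, deque([changed_block])
    while queue:
        block = queue.popleft()
        for nxt in downstream.get(block, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    tests = sorted({t for b in seen for t in tests_for_block.get(b, [])})
    return seen, tests

downstream = {"Gain1": ["Sum1"], "Sum1": ["Saturation1"], "Saturation1": ["Out1"]}
tests_for_block = {"Sum1": ["test_sum_limits"], "Out1": ["test_output_range"]}
blocks, tests = impacted("Gain1", downstream, tests_for_block)
print(blocks)  # {'Gain1', 'Sum1', 'Saturation1', 'Out1'}
print(tests)   # ['test_output_range', 'test_sum_limits']
```

Precision and recall of such a predictor can then be checked against which tests actually changed in the maintenance history, which is how the paper evaluates its tool.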
Queen's University Thesis
With the increasing use of Simulink modeling in embedded system development, there comes a need for effective techniques and tools to support managing these models and their related artifacts. Because maintenance of models, like source code, makes up such a large portion of the cost and effort of the system as a whole, it is increasingly important to ensure that the process of managing models is as simple, intuitive and efficient as possible. By examining the co-evolution patterns of Simulink models and their respective test cases (a useful modeling artifact), it is possible to gain an understanding of how these systems evolve over time, and what the impact of changes to a model are on the relevant test cases. This analysis uncovered opportunities to present useful findings to developers in order to effectively manage model changes. By tracing the impact of a change to a Simulink model block on both the surrounding blocks and the tests associated with the model, developers can ensure that changes are accurately propagated, and can avoid changes that would lead to inconsistencies. To support the model management process, three tools have been produced, each addressing a different aspect of the maintenance process: SimPact is used to identify and highlight the impact of changes to model blocks on tests and the rest of the model, SimTH automatically generates test harnesses for Simulink models, and SimEvo combines these tools into a comprehensive evolution support package, with the ability to interface with existing industry tools. Each of these tools has been evaluated against a large industrial model set, and some are already in current use in industry, demonstrating their effectiveness and applicability to real world problems.
Proc. SCAM 2015, 15th International Working Conference on Source Code Analysis and Manipulation
SimNav is a graphical user interface designed for displaying and navigating clone classes of Simulink models detected by the model clone detector Simone. As an embedded Simulink interface tool, SimNav allows model developers to explore detected clones directly in their own model development environment rather than a separate research tool interface. SimNav allows users to open selected models for side-by-side comparison, in order to visually explore clone classes and view the differences in the clone instances, as well as to explore the context in which the clones exist. This tool paper describes the motivation, implementation, and use cases for SimNav.