Apr 25, 2016
 

2nd International Workshop on Executable Modeling

http://www.modelexecution.org/exe2016

October 2016, Saint Malo, France

co-located with MODELS 2016

We are pleased to announce the second edition of the International Workshop on Executable Modeling (EXE 2016), which will be held in conjunction with the ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS) in October 2016 in Saint Malo, France.

EXE is devoted to the topic of executable modeling, a technique that promises major advances in the development of complex software-intensive systems. With this workshop, we want to provide a forum for in-depth discussion of the challenges and potential of executable modeling, and for exchanging recent research results, ideas, opinions, requirements, and experiences in this field.

You can find additional information on the workshop, including the call for papers, at the EXE 2016 website.

Apr 20, 2015
 

1st International Workshop on Executable Modeling

http://www.modelexecution.org/exe2015

September 27th, 2015, Ottawa, Canada

co-located with MODELS 2015

We are pleased to announce the First International Workshop on Executable Modeling (EXE 2015), held in conjunction with the ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS).

The objective of EXE 2015 is to draw attention to the potential and challenges of executable modeling and to advance the state of the art in this area. The workshop aims at bringing together researchers working on the development of executable modeling languages and model execution tools, as well as practitioners developing or applying executable modeling languages for building software systems. It intends to provide a forum for exchanging recent results, ideas, opinions, and experiences in executable modeling.

Have a look at the EXE 2015 website for additional information on the scope of the workshop, the call for papers, important dates, and the committees.

 

In MDE, the quality of models is an important issue, as models constitute the central artifacts in the development process. When executable models are employed, their functional correctness can be validated by model testing: the model under test is executed, and different properties of the resulting execution are checked. Unfortunately, systematic testing approaches for models are rare.

We developed a testing framework for UML models based on the fUML standard, which provides a virtual machine for executing UML activities. The testing framework comprises a test specification language for expressing assertions on the execution behavior of UML activities, as well as a test interpreter for evaluating them.
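
For illustration, the following plain-Java sketch shows what an assertion on the execution behavior of an activity amounts to. It is a minimal sketch only: the Trace record and the node names are hypothetical stand-ins, since the actual framework provides a dedicated test specification language for this purpose.

    import java.util.List;

    public class ActivityTestSketch {

        // Hypothetical stand-in: a trace records which activity nodes
        // were executed, in order.
        record Trace(List<String> executedNodes) {
            boolean executedBefore(String first, String second) {
                int i = executedNodes.indexOf(first);
                int j = executedNodes.indexOf(second);
                return i >= 0 && j >= 0 && i < j;
            }
        }

        public static void main(String[] args) {
            // Pretend an execution of a UML activity produced this trace.
            Trace trace = new Trace(List.of("validateCard", "checkPin", "dispenseCash"));

            // Assertions on the execution behavior, as a test case might
            // state them (run with java -ea to enable assertions).
            assert trace.executedNodes().contains("checkPin") : "checkPin must execute";
            assert trace.executedBefore("validateCard", "dispenseCash")
                    : "card must be validated before cash is dispensed";
            System.out.println("All execution-order assertions hold.");
        }
    }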

More details on our testing framework for UML can be found at http://www.modelexecution.org/?page_id=524.

ATM Example – New Version of the Test Specification Language

We introduced several new features into our test specification language, illustrated using the example of an ATM system. The example and descriptions of the provided features can be found at http://www.modelexecution.org/?page_id=544.

User Study

To evaluate the ease of use and usefulness of our testing framework, we performed a user study with eleven participants. The results of the user study and other related materials can be found at http://www.modelexecution.org/?page_id=1184.

 

Model differencing is concerned with identifying differences among models and is an important prerequisite for efficiently carrying out development and change management tasks in model-driven engineering. While most existing model differencing approaches focus on identifying differences at the abstract syntax level, we propose to reason about differences at the semantics level. To do so, we utilize the behavioral semantics specification of the used modeling language, which enables the execution of the compared models and thereby allows reasoning about semantic differences.
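
As a rough illustration of the underlying idea, the following sketch compares two models by the observable behavior of their executions rather than by their abstract syntax. All names are hypothetical; in our actual approach, the executions are obtained via the behavioral semantics specification.

    import java.util.List;
    import java.util.Objects;

    public class SemanticDiffSketch {

        // Hypothetical: an execution trace reduced to its observable outputs.
        record ObservableTrace(List<String> outputs) {}

        // Two models are considered semantically equivalent for a given input
        // if executing them yields the same observable behavior.
        static boolean semanticallyEquivalent(ObservableTrace a, ObservableTrace b) {
            return Objects.equals(a.outputs(), b.outputs());
        }

        public static void main(String[] args) {
            // Traces obtained by executing two model versions on the same input.
            ObservableTrace v1 = new ObservableTrace(List.of("open", "debit(100)", "close"));
            ObservableTrace v2 = new ObservableTrace(List.of("open", "debit(100)", "close"));

            System.out.println(semanticallyEquivalent(v1, v2)
                    ? "No semantic difference observed for this input."
                    : "Semantic difference detected.");
        }
    }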

Further details about our approach, our implementation based on the semantics specification language xMOF, as well as examples can be found at http://www.modelexecution.org/?page_id=1118.

 

We analyzed a set of 121 open UML models regarding the usage frequency of the sublanguages and modeling concepts provided by UML 2.4.1, as well as of UML profiles.

The analyzed models have been created with the UML modeling tool Enterprise Architect, and have been retrieved from the Web.

The following three research questions have been investigated in this study:

  1. What is the usage frequency of UML’s sublanguages?
  2. What is the usage frequency of UML’s modeling concepts?
  3. What is the usage frequency of UML profiles?

Information about the analyzed data set, the analysis process, as well as the results of the analysis is provided at: http://www.modelexecution.org/?page_id=982.
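
To give an idea of the kind of computation behind research question 2, the following sketch counts how often each UML metaclass is instantiated in a model loaded with EMF. It assumes the Eclipse UML2 plug-ins (org.eclipse.uml2.uml) are on the classpath; the file path is illustrative, and our actual analysis tooling may differ.

    import java.util.Map;
    import java.util.TreeMap;
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
    import org.eclipse.uml2.uml.UMLPackage;
    import org.eclipse.uml2.uml.resource.UMLResource;

    public class ConceptUsageCounter {
        public static void main(String[] args) {
            // Register the UML package and resource factory.
            ResourceSetImpl rs = new ResourceSetImpl();
            rs.getPackageRegistry().put(UMLPackage.eNS_URI, UMLPackage.eINSTANCE);
            rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
              .put(UMLResource.FILE_EXTENSION, UMLResource.Factory.INSTANCE);

            // Illustrative path to a UML model file.
            Resource model = rs.getResource(URI.createFileURI("model.uml"), true);

            // Count instances per UML metaclass, e.g., Class, Association, Activity.
            Map<String, Integer> usage = new TreeMap<>();
            model.getAllContents().forEachRemaining((EObject e) ->
                    usage.merge(e.eClass().getName(), 1, Integer::sum));

            usage.forEach((metaclass, count) -> System.out.println(metaclass + ": " + count));
        }
    }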

The paper titled On the Usage of UML: Initial Results of Analyzing Open UML Models was accepted for publication at the Modellierung 2014 conference, which will take place in March in Vienna.

 

On Thursday, October 3rd, 2013, we will give a tool demonstration of xMOF at MoDELS 2013. In this tool demonstration, you will learn

  • how you can specify the behavioral semantics of your modeling languages with xMOF and
  • how you can execute your models according to this specification.

In the tool demonstration, we will show our tool support implemented for the Eclipse Modeling Framework, based on a simple Petri net modeling language. You can download an Eclipse bundle with xMOF pre-installed, as well as the used example, at http://www.modelexecution.org/media/201309_xmofdemo_eclipsebundle.

The following tutorial video provides a step-by-step guide for specifying the semantics of the Petri net modeling language and for executing a simple Petri net model: http://www.youtube.com/watch?v=2y1-xpfK-_Q
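
Conceptually, the semantics specified in the demo boils down to the classic Petri net firing rule: an enabled transition consumes one token from each input place and produces one token on each output place. The following plain-Java sketch illustrates this rule; in xMOF, the same logic is expressed as fUML activities rather than Java, and all names here are illustrative.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class PetriNetSketch {

        record Transition(String name, List<String> inputs, List<String> outputs) {
            // Enabled if every input place holds at least one token.
            boolean isEnabled(Map<String, Integer> marking) {
                return inputs.stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
            }
            // Firing moves tokens from input to output places.
            void fire(Map<String, Integer> marking) {
                inputs.forEach(p -> marking.merge(p, -1, Integer::sum));
                outputs.forEach(p -> marking.merge(p, 1, Integer::sum));
            }
        }

        public static void main(String[] args) {
            Map<String, Integer> marking = new HashMap<>(Map.of("p1", 1, "p2", 0));
            Transition t1 = new Transition("t1", List.of("p1"), List.of("p2"));

            // Execute until no transition is enabled.
            while (t1.isEnabled(marking)) {
                t1.fire(marking);
            }
            System.out.println("Final marking: " + marking);
        }
    }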

 

When estimating the quality of an application, non-functional properties (NFPs) such as utilization or throughput are usually monitored. Many analysis and simulation approaches have been proposed to enable early-stage performance analysis at the UML model level. Due to UML's lack of formal execution semantics, however, many of these approaches transform the UML software model into a dedicated performance model, such as a queuing network. This transformation can introduce additional complexity for users and developers.

We therefore used the previously presented model-based analysis framework on fUML to develop a performance analyzer that can analyze the software model directly. This analyzer executes a number of modeled workload scenarios and performs operational analysis to calculate different performance properties. The workload scenarios represent expected interactions with the software that need to be analyzed.
Classes in the software model can be declared as service centers, which provide different operations that can be requested by a workload scenario. A workload pattern, e.g., Poisson arrival, defines how often a specified scenario is executed. The requests resulting from multiple executions compete for the service centers and need to wait if the service center is busy with another request (resource contention). This temporal overlap hampers the performance of the overall software and is reflected in the different performance properties. Furthermore, our performance analyzer is able to consider multiple instances of the same service center and supports different balancing strategies as well as different dynamic horizontal scaling strategies.
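
For a flavor of the operational analysis involved, performance properties can be derived from a few measured quantities via the operational laws, for example the utilization law U = X * S and Little's law N = X * R. The following sketch uses illustrative numbers only and is not the analyzer's actual implementation.

    public class OperationalAnalysisSketch {
        public static void main(String[] args) {
            // Measured quantities for one service center (illustrative values):
            double completions = 500;  // C: requests completed
            double observation = 100;  // T: observation period in seconds
            double busyTime    = 60;   // B: time the service center was busy

            double throughput  = completions / observation;  // X = C / T = 5 req/s
            double serviceTime = busyTime / completions;     // S = B / C = 0.12 s
            double utilization = throughput * serviceTime;   // U = X * S = 0.6

            // Little's law: average number of requests in the system N = X * R.
            double residenceTime = 0.3;                        // R, illustrative
            double avgRequests   = throughput * residenceTime; // N = 1.5 requests

            System.out.printf("X=%.2f req/s, S=%.3f s, U=%.2f, N=%.2f%n",
                    throughput, serviceTime, utilization, avgRequests);
        }
    }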

Further details as well as a case study and the related sources can be found at http://www.modelexecution.org/?page_id=204#resource_contention

 


Recently, we investigated the applicability of fUML for specifying the behavioral semantics of domain-specific modeling languages. For this purpose, we propose the integration of fUML with MOF into a new metamodeling language called xMOF (eXecutable MOF), which allows specifying both the abstract syntax and the behavioral semantics of domain-specific modeling languages.

We developed an implementation of xMOF for the Eclipse Modeling Framework by integrating fUML with Ecore, as well as an implementation of an accompanying methodology for defining executable domain-specific modeling languages and for executing domain-specific models based on the language's semantics specification.
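
For illustration, the abstract syntax part of such a language can be defined programmatically with the EMF Ecore API, as sketched below for Petri-net-style metaclasses. The metaclasses and the namespace URI are merely illustrative, and the behavioral part, which xMOF specifies by means of fUML activities, is not shown.

    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EReference;
    import org.eclipse.emf.ecore.EcoreFactory;

    public class PetriNetMetamodel {
        public static void main(String[] args) {
            EcoreFactory f = EcoreFactory.eINSTANCE;

            EPackage pkg = f.createEPackage();
            pkg.setName("petrinet");
            pkg.setNsURI("http://example.org/petrinet"); // illustrative URI
            pkg.setNsPrefix("pn");

            // Abstract syntax: a Net contains Places.
            EClass net = f.createEClass();
            net.setName("Net");
            EClass place = f.createEClass();
            place.setName("Place");

            EReference places = f.createEReference();
            places.setName("places");
            places.setEType(place);
            places.setContainment(true);
            places.setUpperBound(-1); // multiplicity 0..*
            net.getEStructuralFeatures().add(places);

            pkg.getEClassifiers().add(net);
            pkg.getEClassifiers().add(place);

            System.out.println("Defined metamodel " + pkg.getName()
                    + " with " + pkg.getEClassifiers().size() + " classifiers");
        }
    }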

More details on xMOF can be found at http://www.modelexecution.org/?page_id=250

 

Considering non-functional properties of a software system early in the development process is crucial for guaranteeing that non-functional requirements will be fulfilled by the system under development.

We developed a model-based analysis framework based on fUML that enables the implementation of model-based analysis tools. The framework supports analyzing non-functional properties of a software system based on runtime information in the form of traces obtained by executing UML models on the fUML virtual machine. To this end, the framework integrates UML profile applications with execution traces, so that additional information captured in profile applications can be considered in the analysis, as required, for instance, in performance analysis.
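
As a rough sketch of this integration, the snippet below correlates stereotype values from a profile application (here, a hypothetical expected service demand per operation) with the entries of an execution trace. All names are illustrative and do not reflect the framework's actual API.

    import java.util.List;
    import java.util.Map;

    public class ProfileTraceSketch {

        // Hypothetical: one entry of an execution trace.
        record TraceEntry(String element, long durationMillis) {}

        public static void main(String[] args) {
            // Values from a profile application, e.g., a performance profile
            // tagging operations with an expected demand (illustrative).
            Map<String, Long> expectedDemandMillis = Map.of(
                    "validateCard", 50L,
                    "checkPin", 20L);

            // Trace obtained by executing the UML model on the fUML VM.
            List<TraceEntry> trace = List.of(
                    new TraceEntry("validateCard", 70),
                    new TraceEntry("checkPin", 15));

            // Combine both sources: check observed behavior against the
            // additional information captured in the profile application.
            for (TraceEntry entry : trace) {
                Long expected = expectedDemandMillis.get(entry.element());
                if (expected != null && entry.durationMillis() > expected) {
                    System.out.println(entry.element() + " exceeded expected demand: "
                            + entry.durationMillis() + "ms > " + expected + "ms");
                }
            }
        }
    }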

More details on this topic can be found at http://www.modelexecution.org/?page_id=204

This work is done in collaboration with the SEALAB Quality Group at the University of L'Aquila.
