[Modelinterpreter] Fwd: MODELS 2015 MEIP track Paper Notification [37]

Dévai Gergely deva at elte.hu
Sat Jul 11 00:12:10 CEST 2015


A fo" konferencia egyik workshopja: 1st International Workshop on 
Executable Modeling
Ide lehetne beküldeni a 6 oldalasra rövidített változatát, 17-dike a 
határido".
Gergo"

On 07/10/2015 11:43 PM, Dévai Gergely wrote:
> Not this one, but at least the modeling paper written in the other 
> project was accepted... :-/
>
>
> -------- Original Message --------
> Subject: 	MODELS 2015 MEIP track Paper Notification [37]
> Date: 	Fri, 10 Jul 2015 23:38:28 +0200
> From: 	Alexander Egyed <alexander.egyed at gmail.com>
> Reply-To: 	<alexander.egyed at gmail.com>
> To: 	<deva at inf.elte.hu>, <kmate at caesar.elte.hu>, 
> <nboldi at caesar.elte.hu>, <Gabor.Batori at ericsson.com>, 
> <kitlei at elte.hu>, <kto at elte.hu>
> CC: 	<alexander.egyed at gmail.com>, 
> <models2015-meippapers-webadmin at borbala.com>
>
>
>
> Dear Gergely, Máté, Boldizsár, Gábor, Róbert and Tamás,
>
> Thank you for your submission to the MODELS 2015 MDE in Practice Track.
> We regret to inform you that your submission:
>
> "Design Space Exploration for High Performance UML Model Execution"
>
> was not accepted for publication in the conference proceedings.
>
> Each paper was reviewed by at least three members of the Program Committee
> (PC) and the reviews were monitored by the Program Board (PB). Each paper was
> also extensively discussed during the online PC meeting, and due consideration
> was given to author responses that were provided. On July 7-8, 2015, a PB
> meeting was held in Barcelona, which all PB members had to attend. At that
> meeting, the paper selection was finalized. This year, out of 40 papers
> submitted to the MDE in Practice track, the PC and the PB accepted 11 (27%).
>
> The reviews for your paper can be found at the end of this message and we hope
> that you will find them useful.
>
> We would like to encourage you to consider attending and participating in
> MODELS 2015 and sincerely hope to see you in Ottawa this fall.
>
> Best regards,
>
> Alexander Egyed & Jordi Cabot
> PC Chairs, MODELS 2015
>
> *=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=
>
> First reviewer's review:
>
>            >>> Summary of the submission <<<
>
> The paper first presents the requirements for an industrial-scale model
> executor, followed by an evaluation of the state-of-the-art model executors.
> Several design alternatives are identified, such as interpreter vs. code
> generator vs. JIT compilation, and Java vs. C++. Based on the findings, an
> open source model execution chain is being developed in a partnership between
> Ericsson and academic partners.
>
>            >>> Evaluation <<<
>
> The paper in general is difficult to follow. There are some writing issues and
> typos; some of them are listed in the minor comments section.
>
> Section II focuses on the requirements of model executors. However, it is not
> clear how these requirements were identified. Were they identified through
> collaboration between industry and academic partners, or were they derived
> from the state of the art?
>
> Section III presents the related work on the existing tools. Strangely, only
> the performance of BridgePoint was evaluated and not that of the others. Was
> there any specific reason for this? Another question is why these tools were
> selected. Was any systematic search performed to select them?
>
> Section IV is called "Design Space Exploration", yet in this section the
> paper actually discusses various features of the tools that are apparently
> identified from Section III's survey of the tools. First, the heading does
> not correctly represent what is contained in the section. Second, it is not
> clear how these features were identified from Section III's tool
> descriptions. In the rest of the section, various features are compared
> through experiments, for example interpreter vs. generated code vs. JIT to
> Java, and JIT to byte code, in terms of time performance. The major issue in
> the paper is the missing design of the experiments. What research questions
> is the paper trying to answer? How are requirements linked to features? A
> table summarising research questions, requirements, features, and results
> would have made everything connected and easy to follow.
>
> Section V presents the proposed architecture. Once again, a clear link to the
> results of Section IV is missing. In addition, there is no experiment to prove
> that the proposed architecture solves all the problems of the current
> state of the art. This is perhaps because the work is still ongoing, and it
> might be a good idea to resubmit the paper at a later stage of the project.
>
> Minor:
> Several places âEURoe:âEUR? is used instead of âEURoe.âEUR?
> Figure 5 caption is missing. It only says (a), (b) and (c)
>
> *=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
>
> Second reviewer's review:
>
>            >>> Summary of the submission <<<
>
> After stating industrial requirements for a model execution/debugging/testing
> tool chain in the context of Executable UML, the authors provide a survey on
> such tools, e.g., BridgePoint, Alf/fUML, Moka/Moliz, Topcased, and Yakindu.
> They discuss why existing tools fall short in addressing these requirements.
> Then, the authors explore the design space for such tools by discussing
> multiple design alternatives for various decision points and providing
> prototyping results to support their claims. The paper concludes with the
> proposal for an architecture for a model execution platform based on the
> findings in the design space exploration.
>
>            >>> Evaluation <<<
>
> This paper can be helpful for developers of model execution tools. The authors
> provide a comprehensive manual design space exploration and reason about
> different design alternatives. The paper can be classified as an experience
> report. The stated requirements are reasonable and are potentially useful for
> other companies as well. Clearly, this paper is a work in progress, as the
> prototype the authors are working on is not finished. Thus, no case studies
> can be reported yet. Nevertheless, I think the paper is well within the scope
> of the "MDE in Practice" track and can potentially be a useful source of information for
> developers of model execution tools.
>
> *=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
>
> Third reviewer's review:
>
>            >>> Summary of the submission <<<
>
> This paper describes a joint effort between university researchers and industry to
> develop a tool to efficiently execute UML models for large-scale design efforts.
>
>            >>> Evaluation <<<
>
> This is a good example of industry/academia partnerships. The paper is
> generally well written and gives a good description of the goals, process, and
> initial results.
>
> My one concern with the paper (actually the work itself) is that tools already
> exist that will simulate UML models (e.g. Rhapsody and Magic Draw). Further,
> these tools are more compliant with the base UML standard than Bridgepoint. I
> would have liked to see this addressed in the paper. Given these tools, what is
> the real contribution here? Just providing an open source solution? (Given the
> cost of Rhapsody and Magic Draw, this is not a trivial contribution, but if
> that's the only contribution, then it should be made clear and there should be
> a comparison with these tools.)
>
> *=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*=--=*
>
>
>
>
>
> _______________________________________________
> Modelinterpreter mailing list
> Modelinterpreter at plc.inf.elte.hu
> https://plc.inf.elte.hu/mailman/listinfo/modelinterpreter


