Refining Transformation Results

An abstract, portable language cannot expose all features of all of the platforms to which it is mapped. In some cases this is accepted by the language’s user community. In other cases, connoisseurs of the target platform in particular will want to refine the results of mapping an abstract model to “their” target platform. Some languages do not even intend to be complete in the sense of requiring no lower-level refinement. For example, if a UML model is used only to specify packages, classes with their properties and operation signatures, and the associations between these classes, the detailed specification of the operation implementations is still missing and needs to be provided in the language to which the UML classes are mapped.
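For illustration, a UML class might be mapped to a Java skeleton like the following (a minimal sketch with hypothetical class and member names): the signatures come from the model, while the operation body still has to be supplied in the target language.

    // Generated from a hypothetical UML class "Account".
    public class Account {

        private double balance;   // mapped from a UML property "balance"

        // The UML model contributes only the operation signature; the body
        // must be written by hand in the target language.
        public void withdraw(double amount) {
            throw new UnsupportedOperationException("not yet implemented");
        }
    }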

Approaches where refinement is desired or required can further be divided into two categories: those that require intrusive refinement of generated artifacts and those where refinements can be specified non-intrusively. The former in particular pose challenges to the tools involved but cannot always be avoided. Consider a refinement option in a descriptor file represented in XML. If the descriptor format does not allow the descriptor to be split across multiple files, intrusive refinement is the only available option.

Most code generator frameworks provide reasonable support for such scenarios, e.g., by means of so-called user code areas or protected areas/regions. Trouble starts brewing when refactorings happen in the model. The sections refined manually in the model transformation results need to be refactored as well, but the modeling tool usually has no knowledge of them. None of the aforementioned generator frameworks is capable of refactoring the refinements after model changes, such as calls to generated operations that were renamed in the model. The user ends up with less refactoring support than in a modern 3GL IDE, where many powerful refactoring operations can be performed with ease.

The fundamental problem may best be exemplified with a code generator that only generates operation skeletons in which developers have to fill in implementations manually, e.g., in protected areas. The hand-written code will typically have to reference other code elements generated from other model elements, such as data types, operations or attributes. While the generator will preserve the developer’s refinements upon re-generation, it will have a hard time adjusting the refinements to other refactorings that happened in the model. For example, if the developer calls another generated operation from within the refinement and the called operation gets renamed in the model, the refined code will no longer compile after re-generation.
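The following sketch illustrates the problem with hypothetical protected-region markers (the exact marker syntax varies between generator frameworks, so this is an assumption rather than any particular tool’s format):

    public class OrderService {

        // Generated skeleton; the content between the markers is preserved
        // verbatim by the generator on every re-generation run.
        public void placeOrder(String orderId) {
            // PROTECTED REGION ID(OrderService.placeOrder) START
            validate(orderId);   // hand-written call to a generated operation
            // PROTECTED REGION END
        }

        // If "validate" is renamed to "check" in the model, the next run
        // generates check(String) instead, but the hand-written call above is
        // copied over unchanged and no longer compiles: the generator keeps
        // the protected region's text, it cannot refactor it.
        public void validate(String orderId) {
            // PROTECTED REGION ID(OrderService.validate) START
            // PROTECTED REGION END
        }
    }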

This general issue also exists for model-to-model transformations and is difficult to fix. Only fine-grained model change logs combined with powerful refactoring frameworks could help, but this combination is usually not available.

A special case of such refactorings is the removal of a model element for which a refinement exists. It is not uncommon for code generator frameworks to delete the manual refinements when mapping the model element’s deletion into the target environment. It is good if the user has applied a thorough version control mechanism, should he or she later find out that some of the code in the refinement is still needed.

For model-to-model transformations, handling manual refinements in the target model is particularly complex. The difficulties start when the transformation writer needs to think about which refinements to permit and how to specify this in the transformation rules. Protected areas are not as easy to define for model element graphs as they are for a sequence of ASCII characters. The OMG’s QVT standard (in particular its Core part) tries to address this by letting the transformation writer define patterns for the target model which may still match after the user has applied changes manually. The transformation can provide default values for those areas where users may later refine the transformation output. Broad adoption of QVT is as yet uncertain (see also [SwJBHH06]).

When a rigid process is in place for versioning, building and assembling the software, refinements incur another problem. They have to be applied to artifacts that are typically produced during a build step. However, most processes will not allow the intrusive manipulation of build results, and it may not be possible to check modified build results into the versioning repository. This may make refinements impossible.

Despite the challenges, refinements are frequently used because the target environment’s tools can be used to perform the changes. The more powerful, usable and convenient these are, the more difficult it will be to use marks or annotations instead (see Marks and Annotations).

Model repositories store model elements and the links between them. The links need to identify the elements that they connect. Different repositories and tools use different ways of identifying model elements, with different effects on the robustness and life cycle of the links.

Two fundamentally different approaches to model element identity management exist:

  • universally unique identifier (UUID)-based
  • key attribute-based

While a model element’s UUID remains stable across its lifetime, some repositories allow the key attributes to change their values. For example, the Web Tools Platform (WTP), built on Eclipse with EMF, uses names for identifying model elements. Element references can break if elements change their name and the referring elements are not covered by the refactoring. UUID-based references are not affected by such changes and from this perspective work better in a large-scale environment where the owners of an artifact do not always know all of the artifact’s users or referrers.
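With EMF, the two styles can be contrasted roughly as follows (a sketch assuming the standard EMF APIs; the resource subclass name is made up). An XMIResourceImpl can be configured to assign a generated UUID to every attached element, whereas key-attribute-based identity relies on a metamodel attribute whose ID flag is set, so that references are serialized against that attribute’s value:

    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.xmi.impl.XMIResourceImpl;

    // UUID-based identity: every element attached to this resource gets a
    // generated UUID as its XMI id, independent of its name.
    public class UuidResource extends XMIResourceImpl {

        public UuidResource(URI uri) {
            super(uri);
        }

        @Override
        protected boolean useUUIDs() {
            return true;   // ids are produced via EcoreUtil.generateUUID()
        }
    }

    // Key-attribute-based identity instead uses an EAttribute marked as an ID
    // in the metamodel; EcoreUtil.getID(element) then returns that value
    // (e.g., a name), and references serialized against it break on rename.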

However, UUIDs incur another problem. The section Referencing Results of Model-to-Model Transformations has already pointed out the difficulty of keeping UUIDs stable for the outputs of model-to-model transformations. Beyond that, UUIDs can get “lost” if elements are accidentally deleted. It then depends on the tools and the capabilities of the repository how this case gets handled and what it means for references pointing to the UUID that has now disappeared.

A good repository needs to be able to store the broken reference, because users may be able to reconstruct the element with the respective UUID, e.g., by fetching a previous version from the versioning repository. Rational Rose always did a great job at this: it marked the missing elements in the diagrams, kept the broken reference, and waited for the user to come up with a version of the missing element. It would then resolve the reference again, and the model was healed. Several current tool and repository implementations still do not support this as usably and robustly.
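In EMF terms, such a broken reference appears as a proxy that can no longer be resolved. A tool can keep these proxies rather than discarding them and report them to the user until the missing element reappears, roughly as in this sketch (assuming the standard EMF cross-referencer utilities; error handling is omitted):

    import java.util.Collection;
    import java.util.Map;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.emf.ecore.InternalEObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.util.EcoreUtil;

    public final class BrokenReferenceReport {

        // Lists all references in the resource whose target could not be
        // resolved (e.g., because the element with that UUID was deleted).
        // The proxies themselves are kept, so the references can be healed
        // later when a version containing the missing element is restored.
        public static void report(Resource resource) {
            Map<EObject, Collection<EStructuralFeature.Setting>> broken =
                    EcoreUtil.UnresolvedProxyCrossReferencer.find(resource);
            for (Map.Entry<EObject, Collection<EStructuralFeature.Setting>> entry : broken.entrySet()) {
                InternalEObject proxy = (InternalEObject) entry.getKey();
                System.out.println("Unresolved reference to " + proxy.eProxyURI()
                        + " from " + entry.getValue().size() + " element(s)");
            }
        }

        private BrokenReferenceReport() {
        }
    }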

It is also important that for UUID-based references the tools’ undo/redo functionality restores a deleted element under its original UUID.

Summarizing, one can say that the advantages that UUID-based links offer in a large-scale setup come at a price that not all current repository and tool implementations are willing to pay. Caution is therefore required in selecting the right infrastructure components for an enterprise modeling setup.

Text syntaxes are one valid way to view and edit models, and in particular one that many developers prefer over graphical editors. An expression in the Object Constraint Language (OCL) is an example of a set of model elements for which a textual syntax is the dominant form of viewing and editing. Other examples are syntaxes derived using the Human-Usable Textual Notation (HUTN, [HUTN]). Smalltalk systems followed a similar paradigm: the development artifacts were objects maintained in the image, and the IDE was ultimately just manipulating these objects. This gave Smalltalk some great browsing and refactoring capabilities.

IBM tried to apply this paradigm to Java in its VisualAge toolset, based on the Envy repository, but issues around having to import and export the Java sources in order to use external tools operating on the source code proved to be stumbling blocks.

The boundaries between editing program text as ASCII files stored as such in the file system and editing a text view of a model repository are starting to blur. Eclipse’s Java Development Tools (JDT) provide excellent refactoring and navigation capabilities, although to developers it appears that the sources are still stored as ASCII files in their typical folder structure. JDT manages this by maintaining all kinds of indices in the background. This approach comes very close to using a model repository with name-based identity and references (see also Handling Inconsistencies) and in addition preserves all the benefits of regular text editors, such as keeping all lexical information the user entered (such as indentation and comments), the ability to save inconsistent or unparsable texts, or copying and pasting arbitrary sections of text rather than only valid subtrees of the concrete syntax tree. A similar approach is pursued by TEF [Sch07] and openArchitectureWare’s Xtext [Xtext].

Problems occur when trying to put a parser-based approach on top of a UUID-based repository. The parser cannot easily identify changes, particularly if elements changed their names in the text. Elements with new UUIDs may get created, and old references will therefore break.
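A common mitigation is for the text importer to match the parsed elements against the existing repository content by name and to reuse the matching element, and thus its UUID, wherever possible; only unmatched elements receive fresh UUIDs. A rename in the text still defeats this, because the renamed element matches nothing and is treated as new. The following sketch illustrates the idea with hypothetical types:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    // Hypothetical model element with a name-based key and a stable UUID.
    class ModelElement {
        final String qualifiedName;
        final UUID uuid;

        ModelElement(String qualifiedName, UUID uuid) {
            this.qualifiedName = qualifiedName;
            this.uuid = uuid;
        }
    }

    class TextImporter {

        // Matches freshly parsed elements against the repository content by
        // qualified name. Matched elements keep their UUID; unmatched ones
        // get a new UUID. A renamed element is not matched and therefore
        // silently becomes a "new" element, breaking old UUID references.
        Map<String, ModelElement> merge(Iterable<String> parsedNames,
                                        Map<String, ModelElement> repository) {
            Map<String, ModelElement> result = new HashMap<>();
            for (String name : parsedNames) {
                ModelElement existing = repository.get(name);
                result.put(name, existing != null
                        ? existing
                        : new ModelElement(name, UUID.randomUUID()));
            }
            return result;
        }
    }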

At the far other end of the spectrum are approaches like that of Intentional Software [Sim07]. There, the tools can combine a variety of different syntaxes, among them text-based ones, even in a single editor. Modifications are applied directly to the underlying model and affect all other views immediately. Intelligent parsing technology ensures that syntactically incorrect stretches of text can still be saved and that editing the model feels like editing a text document. With such an approach it becomes possible to combine text-based syntaxes with repositories that use UUIDs.

However, these capabilities are not yet widely available, and in the tools market, customers are reluctant to get locked into proprietary solutions. As a result, the powerful combination of graphical and forms-based syntaxes with text views has not reached widespread adoption at the time of writing.

Posted on 2007-09-23 01:52 by CharlieShen