Disclaimer: This documentation has been extracted from the Team for Capella Guide, which is available in the Team for Capella client from the menu Help > Help Contents. Some links referencing topics from Capella, Sirius or other components with documentation in the embedded help will not work.
Team for Capella is an add-on that allows users to collaborate on remotely shared models and representations. For this collaboration between users to operate smoothly, Team for Capella relies on the following features:
Relying on an SCM tool to manage concurrent accesses is possible, but clearly limited. The main reason is that the needs for managing model versions (the genuine objective of an SCM tool) and for managing concurrent accesses are deeply different:
Here, fragments are created to manage concurrent accesses and no longer because their content has to be versioned. The overall idea of the Team for Capella solution is to separate the management of both needs:
Team for Capella Solution: 3 products.
The release note is updated for each new version and contains descriptions of user-visible changes and of new or modified APIs accessible to developers. The change log can also be found online: Team for Capella Change Log
Compatibility with Capella 6.1.0
Compatibility with Capella 6.0.0
The -archiveCdoExportResult argument has been added in order to zip (or not) the XML file resulting from the cdo export command launched by the importer as an intermediate step. When the XML file is zipped, the zip is created in the "output folder" (see the arguments of the T4C importer) and the original XML file is then deleted. The default value is true.
The -stopRepositoryOnFailure argument has been added in order to stop the repository when the import/export fails. This parameter cannot be set to true if the -closeServerOnFailure argument is already set to true.
Constants from com.thalesgroup.mde.melody.collab.importer.api.TeamImporterConstants, used especially for telnet commands, have been moved to a new class, com.thalesgroup.mde.melody.collab.importer.api.TeamServerConnectionsConstants, in order to share arguments between the importer and exporter applications.
Compatibility with Capella 5.2.0
Compatibility with Capella 5.1.0
As experimental features:
Compatibility with Capella 5.0.0
Please also refer to Sirius Release Notes, Capella Release Notes and Sirius Collaborative Mode Release Notes
The maintenance application is provided in the com.thalesgroup.mde.melody.collab.maintenance plugin and can be launched from the Scheduler's dedicated jobs. Refer to the Server Administration / Administration tools section of the documentation for more details.
A new fr.obeo.dsl.viewpoint.collab.server.warmup plugin has been added on the server. It provides an org.eclipse.emf.cdo.spi.server.IAppExtension which reacts to repository start-up and loads all found resources which are direct children of the projects folder (the .representation folder and .srm representation resources are excluded). This initializes the revision manager caches at repository start-up and speeds up the session opening of the first connection to each project. This behavior can be disabled with the system property -Dfr.obeo.dsl.viewpoint.collab.server.enabledWarmup=false.
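As an illustration, the warm-up could be switched off by passing this property to the server's VM arguments. The launcher name below is an assumption, not the actual product script:

```shell
# Sketch: disable the server warm-up via the system property.
# "server.sh" is a placeholder for your actual Team for Capella
# server launcher; the property goes after -vmargs.
WARMUP_OFF="-Dfr.obeo.dsl.viewpoint.collab.server.enabledWarmup=false"
echo "./server.sh -vmargs $WARMUP_OFF"
```

Leaving the property unset keeps the default behavior (warm-up enabled).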
Team for Capella 1.4.0 introduces partial support for internationalization: all literal strings from the runtime part of the Team for Capella add-on are now externalized and can be localized by third parties by providing the appropriate "language packs" as OSGi fragments. Note that this does not concern the server components, the user profile component, the maintenance and importer applications, the administration components or the parts of the UI inherited from Eclipse/EMF/GEF/GMF/Sirius/CDO and other libraries and frameworks used by Team for Capella.
Some API changes were required to enable this. Most breaking changes concern the plug-in/activator classes from each bundle. They are:
com.thalesgroup.mde.melody.collab.license.registration.TeamForCapellaLicenseRegistrationPlugin, a subclass of org.eclipse.emf.common.EMFPlugin, has been added. The corresponding OSGi bundle activator is the internal class TeamForCapellaLicenseRegistrationPlugin.Implementation.
Additional non-breaking changes:
Externalized strings are defined in a plugin.properties or messages.properties file depending on their initialization with org.eclipse.sirius.ext.base.I18N or their inheritance from org.eclipse.osgi.util.NLS. These (translated) messages are available at runtime as static fields of Messages classes, added to all concerned bundles (always in the same package as their plug-in/activator class). The concerned bundles are:
com.thalesgroup.mde.melody.collab.ui
com.thalesgroup.mde.melody.collab.license.registration
Messages classes have been completed with additional translation keys (and default values). Multiple Messages classes from the same plugin have been merged into a single class per plugin. The concerned bundles are:
com.thalesgroup.mde.cdo.emf.transaction
com.thalesgroup.mde.melody.collab.ui.airdfragment
Strings from plugin.xml files have been extracted with default values into the corresponding plugin.properties files.
com.thalesgroup.mde.melody.collab.importer.api.TeamImporterConstants.CDO_EXPORT has been added to launch the cdo export command and use the resulting file as the base to execute the repository import. This parameter should be used with XML_IMPORT_FILE_PATH to determine where the cdo export file should be saved.
The -XMLImportFilePath argument has been added to allow the importer to be used from a file produced by a cdo export command on the CDO server. In that case, the importer will not connect to the current CDO server but will perform the import from a virtual CDO server based on the XML export. The expected argument is the file path to the cdo export result.
The -cdoExport argument has been added to make it possible to automatically perform the cdo export command and use the resulting XML file as described for -XMLImportFilePath above. The default value is false. The -XMLImportFilePath argument is mandatory since the same file path is used to perform the XML import.
The importer is now split between a generic part in the fr.obeo.dsl.viewpoint.collab.importer plugin and a Team for Capella specific part. The code has been refactored and dispatched in the proper plugins. The previous version of the com.thalesgroup.mde.melody.collab.importer plugin did not declare any classes as explicit API; com.thalesgroup.mde.melody.collab.importer.api.TeamImporterConstants and com.thalesgroup.mde.melody.collab.importer.api.TeamImporterCDOExporter have been promoted to API classes.
Generic importer arguments are defined in fr.obeo.dsl.viewpoint.collab.importer.api.ImporterConstants and are inherited by com.thalesgroup.mde.melody.collab.importer.api.TeamImporterConstants:
Argument | Description |
---|---|
-exportCommitHistory | Whether the commit history metadata should be exported (default: true). If the value is false, all other commit history options are ignored. |
-includeCommitHistoryChanges | Imports the detailed changes of each commit (default: false). This option applies to all kinds of commit history export (XMI, text or JSON files). |
-importCommitHistoryAsJson | Imports the commit history in a JSON file format. The file has the same path as the commit history model file, but with json as its extension. |
-overrideExistingProject | If the output folder already contains a project with the same name, this argument allows the importer to remove the existing project. |
-logFolder | Defines the folder where logs are saved (default: -outputFolder). Note that this folder needs to exist. |
-archiveProject | Defines whether the project should be zipped (default: true). Each project will be zipped in a separate archive suffixed with the date. |
-outputFolder | Defines the folder where projects are imported (default: workspace). Note that this folder needs to exist. |
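As a sketch, these arguments might be combined in a scheduled import command. The launcher name "importer.sh" and every path and value below are assumptions to adapt to your installation:

```shell
# Hypothetical importer invocation combining the arguments above.
# "importer.sh" and all paths/values are placeholders, not the
# actual product layout.
IMPORT_CMD="./importer.sh \
-outputFolder /data/backups \
-logFolder /data/backups/logs \
-archiveProject true \
-exportCommitHistory true \
-importCommitHistoryAsJson"
echo "$IMPORT_CMD"
```

Remember that both -outputFolder and -logFolder must point to existing folders.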
The properties page (contextual action) on aird files of shared modeling projects has a tab named Repository Information. It presents the connected repository information (location, port and name) as well as a list of connected users on the same repository.
Please also refer to Sirius Release Notes, Capella Release Notes and Sirius Collaborative Mode Release Notes
A new mode allowing lazy loading of representations is activated for shared modeling projects. It translates into much faster project opening because none of the representation data is loaded. The data of a representation is loaded only when the application requires it, for example when opening a representation, copying a representation, or exporting a representation as an image. Warning: switching from one mode to the other requires cleaning the database. Indeed, the lazy loading of representations relies on the fact that the representations are split into many resources in the database. Nevertheless, the application will work properly with a mix of split and non-split representations.
Technically, the lazy loading of representations is activated with the preference CDOSiriusPreferenceKeys.PREF_CREATE_SHARED_REP_IN_SEPARATE_RESOURCE set to true by Team for Capella. It can be disabled with the use of a system property: -Dcom.thalesgroup.mde.cdo.emf.transaction.enableRepresentationLazyLoading=false.
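The opt-out could look like the following; the "capella" launcher name is a placeholder:

```shell
# Sketch: disable representation lazy loading on the client.
# Add the property after -vmargs in capella.ini or on the command
# line; "capella" is a placeholder for the actual launcher.
LAZY_OFF="-Dcom.thalesgroup.mde.cdo.emf.transaction.enableRepresentationLazyLoading=false"
echo "./capella -vmargs $LAZY_OFF"
```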
The representation content is stored in a dedicated srm shared resource. Note that representations in local Capella projects are still stored in the aird resource.
uid is a new attribute on Sirius elements that are serialized in aird (and srm) resources. It is used as a technical id for any element from the Sirius model that is stored in the aird (or srm) resources, except for GMF notation elements. The old xmiids shared resource is no longer used. Its role was to ensure that the xmi:id of elements was kept after export/import on the Team for Capella server.
com.thalesgroup.mde.cdo.emf.transaction.AirdCDOResourceImpl was used for the aird resource. It has been deleted and replaced by fr.obeo.dsl.viewpoint.collab.internal.remoteresource.CachedObjectCDOResourceImpl.
The com.thalesgroup.mde.melody.team.xmisupport plugin has been removed as it is no longer useful.
The limitation that appeared in Team for Capella 1.2.x is no longer effective. When comparing a local project to a connected project, or two connected projects, no differences will be shown between representations if they are identical.
Please have a look at Capella Model Diff/Merge Documentation for more details.
The Audit mode is now active by default in the Team for Capella server. This mode aims to keep track of all versions of each object in the server database. It is required, for example, for comparing different versions of the model.
Please have a look at Audit mode for more details.
User profile resource permissions can now use a regular expression containing spaces. If you used the %20 encoding to work around this problem, you must replace it with a standard space to make it work with the new version.
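For instance, a permission pattern that previously encoded the space could now be written with a plain space. The repository and project names below are hypothetical:

```shell
# Hypothetical user profile resource permission patterns.
OLD_PATTERN='/repoCapella/Project%20Name/.*'  # pre-upgrade workaround
NEW_PATTERN='/repoCapella/Project Name/.*'    # plain space now supported
printf '%s\n' "$NEW_PATTERN"
```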
The Commit History View has been improved to display a list of commits related to the selection and also to display the impacted elements of one or several selected commits. See the Commit View section in the user documentation of Sirius Collaborative Mode for more details about those changes: Commit History View.
The commit description dialog box is displayed if there is a warning associated to the commit description. A warning occurs when:
Please have a look at Change Management for more details.
The uid can be used as a technical id for representations when XmiId synchronization is disabled.
Please have a look at Capella release note for more details about the usage of uid and the migration of models from previous versions to update uids.
Because XmiId is no longer used to identify representations and their elements when performing a Diff/Merge operation between two Capella projects, the internal graphical elements of two representations can technically no longer be matched. This has an impact when comparing and merging two projects in a Team environment:
Please have a look at Capella Model Diff/Merge Documentation for more details.
The XmiidsResource creation during export and its synchronization mechanism are now disabled by default. The system property "-Dcom.thalesgroup.mde.cdo.emf.transaction.disableXmiidsSynchronization=false" allows re-enabling them if needed.
Please have a look at VM Arguments > Disable XmiId synchronization for more details.
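The re-enabling could be sketched as below; the "capella" launcher name is a placeholder:

```shell
# Sketch: re-enable XmiId synchronization (disabled by default).
# "capella" is a placeholder for the actual client launcher; the
# property goes after -vmargs.
XMIID_SYNC="-Dcom.thalesgroup.mde.cdo.emf.transaction.disableXmiidsSynchronization=false"
echo "./capella -vmargs $XMIID_SYNC"
```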
The durable locking mechanism is now disabled by default.
Please have a look at Durable locks management view for more details.
Please have a look at Release note for Sirius Collaborative Mode for more details.
Team for Capella is now based on CDO 4.6 (previous versions used CDO 4.4).
Please have a look at the Capella release notes.
Please have a look at the Capella release notes.
Please have a look at the Capella release notes.
Please have a look at the Capella release notes.
Please have a look at the Capella release notes.
The default strategy for CDO generation concerning Capella meta-model has been changed from reflective feature delegation to dynamic feature delegation.
Please have a look at the Capella release notes.
Team for Capella provides its users with additional functionality on Capella projects, allowing them to collaborate easily thanks to:
Import a file-based model into a workspace. The model may be fragmented or not.
On the Capella Project containing the model, use the contextual menu to launch the Export wizard.
Choose " Capella Project to Remote Repository"
The "Export model to repository" wizard opens. The repository information is initialized with the default settings defined in the Preferences.
Before continuing, the server information has to be verified. To do so, click on " Test connection"
A login dialog pops up. Enter valid login and password (see Server Administration for more information about User management).
If the identification is successful, the " Finish" button becomes active.
If you do not click on " Finish" but on " Next", the following options are available:
If you click " Next" again, you will be able to choose the images you want to export to the repository in this new wizard page.
Refer to Export images to the server when exporting the project for more details.
Then, after having clicked Finish, a progress bar is displayed.
When the export is completed, a dialog shows the result of the process by listing the newly created or overridden resources, as well as the resources that were not found, already existing, or not discovered.
Note that the "discover" mode is not yet implemented, but this dialog informs the user about what has been done during the export.
In the Capella Project Explorer, use the contextual menu to launch the Import wizard.
Choose " Capella Project from Remote Repository"
A wizard opens. The repository information is initialized with the settings defined in the Preferences. This information can be overridden. Before continuing, the server information has to be verified. To do so, click on " Test connection". Follow the login instructions as when logging in to export the model. When the test is successful, the " Next" button becomes active.
A second wizard page proposes to choose the model to import (a shared repository can hold several models).
Optionally change the name of the Capella project that is going to be created.
The behavior of the wizard can be configured with the following options:
If you click on Next you will be able to choose options about which images will be imported.
Refer to Import images from the server when importing the project for more details.
Note: Images that already exist in the workspace will be overridden automatically.
A progress bar appears.
When the import is completed, a dialog shows the result of the process by listing the newly created or overridden resources, as well as the resources that were not found, already existing, or not discovered.
Note that the "discover" mode is not yet implemented, but this dialog informs the user about what has been done during the import.
Once the import is finished, the imported model is automatically opened.
The model files can then be pushed back to Git if necessary.
This command will dump the connected project into a new local Capella project. The local project will contain only the already loaded representations.
It is available in contextual menu on aird file of an opened connected project.
This command is useful if you encounter a save failure issue. You can then use the tool to get a new Capella project, compare it with the project on the server and merge the differences.
Connecting to a remote model is similar to opening a file-based model. The result of a connection is an opened model ready to be modified.
Using the contextual menu on the Capella Project Explorer, click on New / Capella Connected Project
A dialog pops up, asking to specify the information of the remote repository holding the model. By default, these fields are initialized with the values set in the Preferences.
At this stage, the server information has to be verified. To do so, click on " Test connection".
A login dialog pops up. Enter valid login and password (see Server Administration for more information about User management).
Note: By checking "Remember me", you have the option to store your user name and password in Eclipse's Secure Storage. If you do so, your user name and password will not be asked for on future connections.
Once the connection is verified, click on " Next". Select one of the models held in the repository.
The connection will create a new Capella project to hold the local proxy for the remote model. A suffix like ".team" is added by default at the end of the project name, in order to distinguish local and shared projects at the first glance.
Click on " Finish". According to the size of the model, the duration of the connection may vary.
Warning: it is longer than opening a file-based version of the same model.
The connection can fail, for example if a Viewpoint used by the remote model is missing on client side. In this specific case, the following error will be displayed:
Known issue: if this error occurs, it is advised to restart Capella before trying to reconnect (even if you want to connect to another model for which there are no missing Viewpoints).
If the connection is successful, the model is opened in the Capella Project Explorer. Note there is no semantic file ".capella". The ".aird" file contains both information about the remote model and the local diagrams on this model.
At the end of a working session, the model can be closed exactly like a file-based model.
When a connected project already exists, connecting again simply requires a double click on the ".aird" file. If necessary, the login dialog will be displayed.
Both "Automatic Refresh" and "Do refresh at representation opening" can be specified for a given aird. Refer to Sirius documentation: Preference associated to the aird file
For any new local Capella project, the preferences are not overridden for the aird file and the preference values are those displayed in Window/Preferences/Sirius
For a connected project, to define specific refresh preferences, a page has been added to the "Capella Connected Project" wizard to allow users to override refresh preferences for the local aird of the connected project being created. By default, "Enable project specific settings" is checked and both "Automatic Refresh" and "Do refresh at representation opening" preferences are set to false.
It is nevertheless possible to change the default value using the preference fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE. If set to false, then, by default, "Enable project specific settings" is unchecked.
Note: The preference values are not shared between two connected users. The preferences are associated to the local aird of the "Connected project" but not with the shared aird.
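One way to ship the default described above is a plugin_customization.ini file, the standard Eclipse mechanism for seeding preference defaults; verify the exact file location for your installation:

```shell
# Sketch: default "Enable project specific settings" to unchecked.
# plugin_customization.ini is the usual Eclipse way to seed
# preference defaults; check its expected location for your install.
cat > plugin_customization.ini <<'EOF'
fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE=false
EOF
```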
When "Remember me" is used, the login/password couple is stored in an encrypted file (located here: %USERPROFILE%\.eclipse\org.eclipse.equinox.security\secure_storage).
The key used to encrypt this file is generated and depends on the computer, the current Windows account and the Team for Capella architecture (32 bits or 64 bits).
So by default, this file can only be decrypted and used on the same computer, with the same Windows account and the same Team for Capella architecture (32 bits or 64 bits) as those used to create the file.
Because of this, it is not possible to use the Secure Storage feature with roaming user profiles.
Example: if the file was created using "Computer1"/User Account/Team for Capella 32 bits, it won’t be possible to reuse the Secure Storage with "Computer2" or with another user account or with Team for Capella 64 bits.
In the cases described above, the following error will appear in the "Error Log":
A workaround for this problem is to provide, by configuration, the key to use to encrypt the Secure Storage file. To do that:
In the following cases, it could be useful to clear the Secure Storage:
To clear the Secure Storage:
Note: It is not possible to just reset a stored username and/or password for a single repository. By performing these actions, the entire password store will be deleted and you will then have to re-enter your username and password for each repository, the first time you wish to use it.
The purpose of this functionality is to be able to connect to airdfragments in order to work with the whole semantic model but only a subset of representations (diagrams or tables).
It can be useful when working with a big model to shorten connection time and memory consumption.
The model to prepare must be a local model in file format (do an import if necessary). The session must be open.
2 actions can be used:
It must be added in the project (in the project root or in a directory of the project, "fragments" for example).
Model organization after an execution of this action:
- The .airdfragment file path must not contain spaces.
- The project containing the airdfragments must not host several semantic models (only one semantic model is allowed).
When the model is well organized, export it to the server.
You can create connection projects to several .airdfragments thanks to the dedicated wizard:
The second page of the connection wizard allows selecting the .airdfragments to use.
Note: Connection to fragments belonging to different models is not allowed since it does not make sense.
Connections to fragments example:
As previously, it is still possible to connect to the .aird, all diagrams will be accessible.
It can be needed to move diagrams between aird and airdfragments and between 2 airdfragments.
This can be done on a local model or on a remote model (the source and destination resources must be visible from the same connection project).
To move a diagram to another resource, use the "Move Diagrams" sub menu:
In addition, to ease diagrams management, the "Representations per resource" item can be useful. To display it, uncheck it in the "Customize View…" dialog.
airdfragments can only be managed in a local model (do an import if needed).
Do not use the Eclipse delete command directly; all content would be lost.
Several users access the model held by the Team for Capella Server repository through their Team for Capella Client. The Capella project on the client side consists only of one ".aird" file, which is both a proxy towards the shared repository and a container for the local diagrams.
Fundamental principles
Red locks indicate another user is currently modifying the element (this modification might be a deletion). The identification of the user holding the lock is added between brackets as a suffix.
Green locks indicate the current user has reserved or modified the current element.
Below is an example of the decorations in the Project Explorer.
When an element is locked by another user, its editor dialog is still accessible but cannot be modified (all fields are disabled).
Lock decorations are visible in any View of Capella, such as the Semantic Browser, the selection dialogs or the delete confirmation window.
On diagrams, the semantic locks are represented on the graphical artifacts (containers, nodes, ports, links) representing the locked model elements.
Updates of modified semantic elements are performed automatically.
Two users cannot work simultaneously on the same diagram. As soon as a user modifies a diagram, the whole diagram is locked for the other users.
When creating, cloning or moving a representation, the associated semantic target element is automatically locked. This avoids a situation where, on a connected project, the current user saves a newly created representation with a null target because another user deleted the target just before the current user saved.
Note that a warning is displayed in the dialog box to ask the user to save as soon as possible in order to release the lock.
This behavior can be deactivated using the preference
CDOSiriusPreferenceKeys.PREF_LOCK_SEMANTIC_TARGET_AT_REPRESENTATION_LOCATION_CHANGE with a false value.
Note: This behavior has a particular impact when using User Profile. If the user has only a read-only right on the semantic element, he cannot create/clone/move a representation on it.
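The deactivation mentioned above could be sketched as a preference default. The fr.obeo.dsl.viewpoint.collab preference node is an assumption derived from the constant's owner, so verify it before relying on this:

```shell
# Sketch: turn off automatic locking of the semantic target element.
# The preference node "fr.obeo.dsl.viewpoint.collab" is an assumption;
# only the key name comes from the documentation above.
cat > plugin_customization.ini <<'EOF'
fr.obeo.dsl.viewpoint.collab/PREF_LOCK_SEMANTIC_TARGET_AT_REPRESENTATION_LOCATION_CHANGE=false
EOF
```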
The lock diagram decorations are visible both on the tab bar of the diagram editor and in the Project Explorer.
When a diagram is locked by another user:
However, even though another user locks a diagram, semantic elements appearing on this diagram can still be modified by anyone. This is the case, for example, of the Function "Acquire Images" in the above example. The opposite is true as well: one can have a green lock on a diagram even though some semantic elements appearing on this diagram are locked by other users.
Once the user modifying a diagram saves and commits the modifications, the diagram is no longer locked. For the other users currently displaying the diagram, there are two different alternatives:
After the refresh is performed, the new layout becomes visible.
Note: on the above example, one semantic element ("Acquire Images") was currently being renamed by the user. The consequence is that the refresh induces a new change (and thus a green lock) on the diagram to reflect the label update.
In Capella, the background of diagrams always represents a semantic element (which is the element under which the diagram is located in the Project Explorer). In case this semantic element is locked (hereunder the Root System Function), a specific decorator is put on the background of the diagram. This means for example that even though the diagram is locked for edition (green lock), adding a new element on the background of the diagram is not possible.
Diagrams can be local or shared in the repository. Shared diagrams have specific decorators.
When creating a new diagram, a dialog pops up asking the user to choose whether the diagram should be shared (cdo://) or local (platform:/resource…).
It is possible to move diagrams from the repository to the local project and vice versa.
From the local project to the shared repository.
From the repository to the local project.
Note that there is a warning when the selected target is local.
Important note: semantic elements created on a local diagram are instantaneously shared with other users as soon as a commit is performed. Local diagram does not mean local elements.
It is possible to explicitly lock an (or a set of) element(s) by using the contextual menu.
Note that only semantic elements are locked. Diagrams can also be locked explicitly, but individually.
The behavior of locks set manually is a bit different from that of automatic locks: while automatic locks are systematically released at each commit, elements locked explicitly have to be unlocked explicitly as well.
Consider the following use case
Currently not available.
A preference allows specifying whether a description is required when committing. If this option is enabled, the following dialog is displayed on each commit action.
Dialog buttons:
Another preference allows the user to pre-fill the commit description using various strategies. The default strategy exploits the previous commit description, while the Mylyn strategy relies on the content of the currently-active, non-completed Mylyn task using the template defined in the Mylyn > Team preferences. Below is an example of such a template:
${task.description}
User Information:
Key: ${task.key} URL: ${task.url}
For more information about these templates, refer to the Mylyn documentation.
A dedicated view allows displaying the commit history. This window can be opened with the contextual menu called on the semantic model.
This view is particularly useful to monitor the current changes on the shared model. The objective of this history is also to be attached as a change log when pushing a file-based version of the model back to Git.
This view is divided into two parts:
The Commit History View contains several buttons to modify the context of the commits list, filter those commits or modify the changes viewer tree layout/content.
In particular, a "Filter" button is present in the Commit History view toolbar and allows the user to filter the content of the impacted elements.
This button is represented by the following icon:
By activating or deactivating this button, the user can apply or remove the selected filters.
Selected filters can be customized in the menu icon > Filters...
A new selection dialog is opened. From this dialog, the user can select the filters to activate for the Commit History view. The filters provided in this selection dialog are the same as the filters available in the Capella Project Explorer.
The properties page (contextual action) on aird files of Capella connected project has a tab named Collaborative Session Details. It presents the repository information (location, port and name) and information about connected users and locked elements for this connected project. For more details, refer to Collaborative Session Details of the Sirius Collaborative Mode user documentation.
The properties page (contextual action) on aird files of local or connected Capella projects has a tab named Sirius Session Details. It provides a lot of useful information about the project (used viewpoints, information about representations and Capella models). For more details, refer to Sirius Session Details of the Sirius user documentation.
Images can be used
In remote models, only images that exist on the repository can be used. Images from the workspace or from a local directory must be uploaded to the server in order to be used in a remote model.
Once the project is exported, it is still possible to manage images on the server with the
Manage Images from Remote Server dialog.
This dialog is available from the contextual menu on a shared aird file or an open connected project.
It is also possible to upload whole sets of images by selecting projects, folders or single images from the workspace.
The hierarchy of uploaded images (projects and folders) is identical to the selection in the workspace.
An existing image can be overridden on the server. All the diagram elements, in the shared diagram, using the replaced image, will be automatically updated.
On the Export project wizard, you will be able to choose the images you want to export to the repository in this new wizard page.
Note: The images used by the exported projects will be automatically exported to the repository to keep the consistency of the shared representations. This means that if you explicitly use an image in one of your projects to export, this image will be exported even if you didn't select it.
The left panel shows the existing images in the open workspace projects, and the right panel shows the images you have chosen to export from the left panel. The " Override already existing images" checkbox allows you to override existing images on the repository that have the same path as those added to the right panel.
Images in JPEG, JPG, PNG and SVG format are supported.
The maximum size of an image uploaded through the export wizard is 10 MB. Larger images are not displayed in the selection UI and cannot be exported to the server. This limit can be changed by overriding the PREF_MAX_KILOBYTES_IMAGE_SIZE preference.
|
If the referenced images do not exist when exporting the project to the server, an error appears in the "Error Log" listing all missing images.
Open the error details to see all affected images:
|
|
If an image that has been exported to the server is afterwards not used anymore in a remote diagram, then this image will not be imported when importing the project if you choose the Import only used images option in the import wizard. |
When a model is exported to the Team for Capella Server, referenced images available in the workspace are exported along with the model. In the local workspace, it is important to place images in the right project, because their location drives how they are recreated when the project is later imported locally (after it has been exported to the server).
Local project where the images image1 and imageLib1 have been used as workspaceImage before exporting: |
|
Projects after exporting then importing the remote project:
|
|
Importing images is done when importing a remote project in the workspace using the Team for Capella import wizard.
When importing the remote project locally, the imported images will be created in local projects that correspond to their location on the server.
The import wizard allows you to choose from 3 different options for importing images:
|
Images that already exist in the workspace will be overridden automatically. |
Starting from a local project, all images in the workspace have been exported to the server with the project.
Suppose that /ImageLibrary/imageLib1.png is referenced by the project, and that /In-Flight Entertainment System/image1.png has been exported because it was explicitly chosen in the export wizard page.
Let's consider that the local workspace is then completely cleaned up to import the remote projects.
The result of the import will be different according to the selected option:
Import all images |
|
Import only used images |
|
Do not import images |
|
|
|
|
What to retain in a few words:
|
Only images that already exist on the repository can be used in remote models. Images from the workspace or from a local directory must be uploaded to the server before they can be used in a remote model.
In a diagram, it is possible to associate an image with a node using "Set style to workspace image".
Select the project or folder where your image is located and select it in the image gallery:
From this dialog it is also possible to manage remote images. Refer to the "Manage images on remote repository" documentation.
It is possible to add a description containing images to any element of a Capella project, using the Description tab of the Properties view.
As in remote models, only images that exist on the repository can be used. There are two ways to add an image to the description:
To add an image with the selection dialog, click on the Add image button and choose the image.
Images are then added to the description:
One classical pitfall is to export models (libraries and projects) linked by a "reference" relationship one by one. Instead, linked models must be exported at the same time, because exporting them one by one may lead to re-exporting models that have already been exported. As an illustration, with two projects P1 and P2 referencing library L1, exporting P2 after having exported P1 may lead to one re-export of L1. The following section describes the correct procedure.
We assume in this section that a Team for Capella Client is opened and its workspace contains a set of models (projects and libraries) that are interconnected by the way of reference links.
In that context, the export procedure is as follows:
The figure below illustrates the four steps described above in the given context:
Libraries can be accessed as classic remote projects with Team for Capella and have almost the same behavior as with Capella standalone:
It is allowed to open, in the same client, a project and some libraries it references. Thus it is possible to have 2 views (or more) of the same semantic elements:
If a library is referenced with a "readAndWrite" access policy, it is allowed to change its semantic model from the project connection, from P1.team in this example:
Even if the user is logged in to both L1 and P1 with the same login, when a change is done on one side there will be a green lock on this side and a red lock on the other (so concurrent changes on library elements are forbidden).
Team Preferences are available in Window / Preferences / Sirius, section Team Collaboration.
The Registered Repositories section contains all saved server information. There is a default saved repository that can be overridden only in this preference page. Registered repositories can be edited, duplicated or removed and new repository configurations can be added. All these configurations can be retrieved in the Connection / Import / Export wizards.
The "Require description for commit actions" check box specifies whether a dialog allowing you to enter a description should be displayed systematically when committing.
By activating the "Pre-fill commit description" preference, any time the user is asked to enter a commit description, the framework will compute one using a list of registered participants (see description below). This description is presented to the user, who can modify it or simply reuse it for the current commit.
By activating the "Automatically use the pre-filled description when none is provided" preference, any time the user commits without explicitly providing a commit description, the description computed by the mechanism described above will be used.
Please check the following settings in the other sections of the Preferences.
For better responsiveness of the whole workbench, the synchronization of the Semantic Browser should be disabled. Reminder: when the Semantic Browser is not permanently synchronized, pressing F9 focuses it on the currently selected element.
"Automatic refresh" and "Do refresh on representation opening" are activated by default, as in Capella.
They can nevertheless be overridden at the project level.
Automatic synchronization of Semantic Browser is deactivated by default.
A Capella Configuration Project cannot be shared among several users by exporting it to the server.
To use the Capella Configurability feature in Team for Capella, the Capella Configuration Project needs to be referenced in each Team for Capella connection project.
The client behavior can also be set using VM arguments added to the capella.ini or in a launch config.
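For instance, VM arguments are appended after the -vmargs line at the end of capella.ini. In this sketch, -Xmx3000m is the heap-size example quoted elsewhere in this guide, while the -D line is a purely hypothetical system property shown only to illustrate the syntax:

```ini
-vmargs
-Xmx3000m
-Dexample.property=value
```

Remember that, as noted in the importer section, the same change may also be needed in importer.bat, because its Eclipse parameters override the values defined in capella.ini.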
Change management is about adding extra information about user activities while modeling. This information can be related to any aspect of the modeling session (current tasks, current teams, a more detailed explanation, etc.). Its integration in Team for Capella provides a way to:
This information is attached to a commit and can be visualized in the Commit History view by selecting each commit. Be aware that some commits are made by the modeling tool itself; they do not represent commits that users would have made, and they are tagged with the property team-technical-commit : true.
The main documentation of the Commit History View is available in the corresponding section of the Sirius Collaborative Mode user documentation.
Note that some actions have been hidden in Team for Capella, such as the Create Branch... and Checkout popup menus. You can enable the CDO Actions capability in the Preferences page to access them.
In Team for Capella there are two ways to fill in the extra information attached to a commit.
The following sections explain the different facilities used to compute a commit description.
This strategy uses the history of the Team for Capella Server to guess what information the user wants to enter. Before each commit, it will look for the last commit done by the current user (that is not a technical commit). For example, let's say the current user is user1 and the server has the following history:
Date | User | Description
---|---|---
31/08/2017 16:00 | User1 | Update Xmi Ids (team-technical-commit : true)
31/08/2017 15:59 | User2 | Activity 2 - Doing some work
31/08/2017 16:58 | User1 | Activity 1 - Doing some other work
31/08/2017 16:57 | User1 | Activity 1 - Doing some other work
If user1 saves the model, the framework would compute the following commit description:
Activity 1
Doing some other work
If the preference "Require description for commit actions" is activated, a dialog will open suggesting this message.
If it is not activated and the preference "Automatically use the pre-filled description when none is provided" is activated, the commit will be made using this message as the commit description.
To activate this strategy, go to the preference page Sirius > Team collaboration, select Pre-fill commit description and select CDO History. Be aware that this mode only works with an authenticated Team for Capella Server.
This strategy uses Mylyn tasks to compute a commit description. Using the template defined in "Preferences > Mylyn > Team", it computes a commit description from the active, not yet completed task. This strategy is really handy when combined with the "Automatically use the pre-filled description when none is provided" preference: with this configuration, the user only has to activate or deactivate Mylyn tasks to get a clean history filled with extra information.
To activate this strategy, go to the preference page Sirius > Team collaboration, select Pre-fill commit description and select Mylyn.
Once the history is filled with meaningful information, the user might want to exploit it. To do so, it can be exported to a model format using the "Export Metadata" actions from the Commit History view.
Another way to export metadata is by using the importer.
Once the information is exported to a file, a model editor can be used to browse the different activities that occurred on the server. Using the "text" tab, the user has access to a textual representation of the current model and can even query it using AQL requests (more documentation here). Here is a representation of the metamodel:
For example, to retrieve all users that have participated in a given activity, the following AQL request can be used:
aql:self.activities->select(a|a.description.contains('Activity 1'))->collect(a|a.userId)
Using a dedicated format in the commit description (defined here), the user can even create custom properties. Each of them will be transformed into an ActivityProperty, which can be used to create more advanced AQL requests.
When using a server configured in Audit mode, it is possible to compare commits with each other. To do so, open the Commit History view, select one or two commits and use the "Compare with each other" or "Compare with previous" menus. The comparison is done using the Diff/Merge framework (see documentation here).
Limitation: the Commit History view can merge consecutive commits with the same user and description into a single visible commit. The Diff/Merge actions are not enabled on this kind of commit; you first have to deactivate the "Merge Consecutive Commits" option to make those actions available.
In the picture above, the differences are stored under 2 roots, each representing a resource.
Be aware that, at this time, the integration between Team for Capella and Diff/Merge does not offer merge functionalities.
A Team for Capella installation can be completed with Jenkins, used as a scheduler for the various jobs managing the Capella projects shared on a CDO server. Indeed, project administrators will find functionalities concerning:
Team for Capella provides many applications (Backup, diagnostics...) manageable by Jenkins jobs in order to have a web interface for managing your shared projects. You can refer to the documentation for the installation of Jenkins.
The full Jenkins documentation can be found at the following address: https://www.jenkins.io/doc/.
By default, it is available on port 8036: when logged on the computer running the Scheduler, type the following address in a web browser:
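Given the default port above, the address is typically http://localhost:8036/ when working on the Scheduler machine. A quick availability check can also be done from a terminal (host and port are the assumed defaults; adapt them if your installation differs):

```shell
# Prints the HTTP status code returned by the scheduler (e.g. 200, or 403 when
# security is enabled and anonymous access is denied).
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8036/
```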
By default, for all jobs, the results of the last 100 job executions (called "builds" in Jenkins) are kept by Jenkins (build artifacts and logs). Note that all these jobs can be changed with the Jenkins application.
The default view is the "Server Management" one.
This job lists the currently active repositories on the server.
The list result is logged in the console output of the job.
These repositories can be stopped by using the Server – Stop repository job.
This job lists:
This job starts the server. By default, it starts the server every Saturday at 06:00. It never stops (and must not be aborted) except if "Server – Stop" is launched.
This job starts a repository on the server that was previously stopped by the "Server – Stop repository" job. When a server starts, all its repositories start as well.
This job stops the server. By default, this job stops the server every Saturday at 05:00 (and is restarted one hour later by the previous job).
This job stops an active repository on the server.
Use Server – List active repositories to list all active repositories.
A stopped repository cannot be reached, and remote projects existing in this repository cannot be modified. The Database – Backup job will not back up a stopped repository.
The server will still be running and the other non-stopped repositories will still be reachable.
This job is only present in the commercial versions of Team for Capella.
It allows managing the license server directly from the Scheduler. It is disabled by default.
This job dumps the database into a zip file and keeps it as an artifact of the build. By default, it is launched automatically 3 times a day (07:30, 12:30 and 20:30) from Monday to Friday.
Note that this job will perform a backup of the whole server. If several repositories are started, it creates one zip file per repository.
We strongly recommend having one database path per repository. See How to Add a New Repository.
This job is intended to restore the database from a previously backed up database.
The backup folder is a result of the "Database – Backup" job.
If you want to restore only one repository, move all other archives out of the backup folder to keep the one specific to your repository.
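The filtering step above can be sketched as a small script. The archive names below are invented for this self-contained demo; the real archives are the artifacts produced by the "Database – Backup" job (they end in "-sql.zip"), so adapt the pattern to your actual file names:

```shell
# Sketch: keep only one repository's archive in the backup folder before running
# the "Database – Restore" job; move every other archive out of the way.
BACKUP_FOLDER=./demo-backup
KEEP_REPO=repoCapella                  # the repository you actually want to restore
mkdir -p "$BACKUP_FOLDER/excluded"
# Demo data standing in for real backup artifacts:
touch "$BACKUP_FOLDER/repoCapella-sql.zip" "$BACKUP_FOLDER/test-01-sql.zip"
for f in "$BACKUP_FOLDER"/*-sql.zip; do
  case "$(basename "$f")" in
    "$KEEP_REPO"-*) ;;                        # leave this repository's archive in place
    *) mv "$f" "$BACKUP_FOLDER/excluded/" ;;  # move all other archives out of the folder
  esac
done
ls "$BACKUP_FOLDER"
```

After the restore, the excluded archives can simply be moved back.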
It executes the exporter application to delete a project from the given repository without any user interaction.
This job deletes a project on the server according to its name, given as a parameter.
It executes the exporter application to export projects automatically to the server from a local folder (or archive), without any user interaction.
This job will export the projects from a specific source. This source can be
This job needs to be configured to specify the folder.
If the job fails, you may have a wrong folder path, or no representation files were found in the folder.
It executes the importer application to import projects automatically from a server without any user interaction and archives them as Job’s artifacts. By default, it is launched automatically every hour from 07:00 to 21:00 Monday to Friday.
This job imports the projects of a specific repository. It needs to be configured to specify the repository and, optionally, a specific list of projects to import. If you have many repositories, you ought to have as many "import projects" jobs, which may start at the same time, so you need to configure the number of job executors accordingly: if the number of T4C repositories has been extended, go to the Manage Jenkins > Configure System menu and ensure that # of executors ≥ number of repositories + 3.
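The executor sizing rule above can be checked with a quick calculation (the repository count below is just an example):

```shell
# Rule from this section: # of executors >= number of T4C repositories + 3.
NB_REPOSITORIES=4                       # example: four repositories to import
MIN_EXECUTORS=$((NB_REPOSITORIES + 3))
echo "minimum executors: $MIN_EXECUTORS"
```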
This job is by default configured to use the Snapshot import strategy. Refer to the Importer strategies documentation for more details.
If the job fails, you may have corrupted data in your database that could prevent you from getting imported projects, and you could face data loss if one day you really need those imported projects. In that case, you may:
This job extracts the user profile model from the database and saves it locally in the archiveFolder.
It is disabled by default and must be enabled only if the repository is configured to use the "User Profiles" access control mode.
|
These jobs cannot be started if the authenticator is based on OpenID Connect. You must start the server with another mode of authentication, or with no authentication. |
This maintenance job needs to be manually launched. This job runs a diagnostic in order to detect inconsistencies described in Server Administration / Administration Tools / Repository maintenance application.
The diagnostic result is logged in the console output of the job. It is kept as an artifact of the job result.
The diagnostic is run for a specific repository and needs to be configured according to your repository name.
This maintenance job needs to be manually launched. It is recommended to launch the Repository – diagnostic job first.
It runs a diagnostic in order to detect inconsistencies described in Server Administration / Administration Tools / Repository maintenance application. Then, it launches the maintenance tasks if some managed issues are detected: it will backup the server with capella_db command, perform the required changes on the database and close the server. The steps are logged in the console output of the job and the corresponding log file is kept as an artifact of the job result.
The maintenance is run for a specific repository and needs to be configured according to your repository name.
This job executes the Tools Credentials application to manage the access tokens of the REST API, for a specific user.
Launching a build requires setting values for four parameters:
This job executes the Tools Credentials application to manage the registered users of the REST API.
Launching a build requires setting values for five parameters:
This job executes the credentials application to clear credentials in the Eclipse Secure Storage; these credentials allow the importer application to connect to the REST admin server or to a CDO repository.
As credentials need to be associated with a repository, when this job is executed it will first ask you to fill in the following parameters:
Note that credentials are required only with the Connected import strategy. See Importer strategies for more details.
This job is the opposite of the previous one: it stores the credentials in the Eclipse Secure Storage, allowing connection either to the REST admin server or to a CDO repository.
As credentials need to be associated with a repository, when this job is executed it will first ask you to fill in the following parameters:
Note that credentials are required only with the Connected import strategy. See Importer strategies for more details.
This view contains templates of jobs which are disabled by default. They are provided as an example to show how to create backup jobs whose result is pushed to a Git repository.
See each job description in the Scheduler to see how to use them.
The Jenkins installation should have included the creation of a new service (named Jenkins) that automatically starts Jenkins with the system.
If you do not have the Jenkins service, go to Jenkins (or start it manually from its installation folder), go to the Manage Jenkins configuration page and select Install as a Windows service.
The Jenkins service can be started or stopped by using the systemctl command:
systemctl start jenkins
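The same systemctl verbs cover the rest of the service lifecycle (assuming a systemd-based installation where the service is named jenkins):

```shell
systemctl status jenkins    # check whether the scheduler is running
systemctl stop jenkins      # stop the scheduler
systemctl restart jenkins   # restart it, e.g. after a configuration change
```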
To start the Team for Capella Server automatically when the scheduler starts (i.e. launch the Start server job), go to the configuration page of the Start server job and check the box "Build when job nodes start"; the "Quiet period" parameter allows delaying the start:
Every job contains in its configuration page a text field called "Schedule". Use this field to change the Job’s scheduling configuration. It is visible on the previous screenshot.
To stop the Jenkins scheduler, go to the Manage Jenkins page and select Prepare for Shutdown.
This sends a warning to anyone currently connected to the scheduler and ends the jobs currently running or in queue. After that, you can simply go to the Windows services and stop the Jenkins service.
By default in the scheduler, the security checks are disabled. This means that Jenkins is available to anyone who can access the Jenkins web UI, without being asked for a login and password.
It is possible to configure security within Jenkins in order to define a group of users allowed to log in to Jenkins, or to check user passwords against the username in LDAP or in Jenkins' own user database. The procedure is the following:
You can also decide to use the Jenkins' own user database:
More details can be found in https://www.jenkins.io/doc/book/system-administration/security/ .
A Jenkins plugin allows the authentication to be handled by MS Azure AD. This plugin is automatically installed by the Jenkins plugins for Team for Capella installation script, but if you have installed Jenkins by another means, it can be installed as follows:
First, go to Manage Jenkins > Manage Plugins. On the Available tab, look for Azure AD Plugin. Before installing it, hover your mouse over the label and open the link in a new tab; this opens a documentation page that will be useful later. Now, check the plugin and press the download and install button. Restart Jenkins.
Once restarted, Jenkins is ready to be configured for authentication with Azure AD. For that, go to the tab that was opened previously and follow the documentation. There are two parts to this configuration: one in Azure AD and one in Jenkins.
Note that on the Jenkins settings side, the Tenant field corresponds to the Directory (tenant) ID of your Azure AD application. It is not necessarily the same value as in the CDO server configuration files (for instance, the value "organizations" can be used instead of the Tenant ID for the purpose of the OpenID discovery mechanism). Also, a test user is requested in order to verify the authentication parameters: it is not the user's name that is needed here but the User Principal Name or the Object ID of this user. Note that, if you want a different list of users having access to Jenkins (compared to the users that have access to the CDO server), you can create a new application on Azure dedicated to Scheduler (Jenkins) access.
I have 2 modeling projects (or more) working with Team for Capella and I want to isolate them in Jenkins (a person logged in to Jenkins must see only the Jenkins jobs dedicated to their project).
The proposed solution uses the internal Jenkins user database but is applicable with some changes to use a LDAP server.
Note that this section can be adapted to different situations: multiple projects, multiple repositories, or even multiple servers managed by the same Scheduler.
When Jenkins is started for the first time, it contains all necessary jobs:
Let’s say the "Projects – Import" job will be used for Project 1. So, rename it to "Project 1 – Import":
Now we will create jobs for Project 2. Click on the "New Item" in the "Backup and Restore" tab.
Then select "Copy existing Job". Copy the "Project 1 – Import" job and rename it to "Project 2 – Import".
The result is the following:
|
Project 1 and Project 2 jobs have to be configured correctly to be used (their build step must be modified to add -projectName ProjectXName) and the number of executors has to be increased. |
Go to "Manage Jenkins" / "Configure Global Security", set parameters as shown in the screenshot:
Do the following changes in the table:
The table must be as follows:
Click on "Save".
Access rights are now activated:
Create the "SuperAdmin" account and use it to log in Jenkins.
Go to the "Configuration" page of a job dedicated to Project 1 and check "Enable project-based security":
Do the following changes in the table:
Do the same work on all jobs linked to Project1.
Repeat all above actions with "Project2Admin" and all jobs linked to Project2.
An admin/user dedicated to a project will not be allowed to see information on jobs of other projects.
For example, when logged in as Project2Admin and with Project1’s server running, Project2Admin will see:
The Team for Capella scheduler (Jenkins) can be configured for a maximum number of build processes that can execute concurrently.
In order to ensure the correct operation of all Team for Capella server jobs, it is vital to set this maximum number of build processes correctly.
For example, if the server machine is to run 5 Team for Capella server processes, then the value of # of executors would need to be set to 6.
WARNING: setting this configuration parameter incorrectly can lead to complete system hangs, no Capella backups, etc!
Each Team for Capella server process relies on two network ports: a server port and a console port. In order to avoid confusion caused by using "magic" numbers for the ports within the scheduler jobs, it is best to create environment variables for them.
Note: the hyphen character is not allowed within the names of environment variables. Therefore, in the above example, although the repository name is test-01, within the environment variable name the hyphen is replaced by an underscore, i.e. Test_01.
cd %TEAMFORCAPELLA_APP_HOME%/tools
command.bat -consoleLog localhost %TEAMFORCAPELLA_CONSOLE_PORT_TEST_01% cdo stopserver
del *-sql.zip

cd %TEAMFORCAPELLA_APP_HOME%/tools
command.bat -consoleLog localhost %TEAMFORCAPELLA_CONSOLE_PORT_TEST_01% capella_db backup '%WORKSPACE%'
By default, Jenkins is launched using the java executable found in Windows\System. If the Java version of this executable differs from the Java Runtime Environment\CurrentVersion key in the registry, the service cannot be installed. If this problem is encountered, there are 2 solutions:
By default, the connection used to launch commands from jobs has a timeout of two minutes. However, in specific cases (like saving a large volume of modifications) you may want to increase this timeout value. For the importer or maintenance jobs (which rely on the importer or maintenance application), the timeout can be increased by defining the -consoleTimeout parameter (see the Importer parameters documentation). For any other job (which relies on the command application), the connection timeout can be specified, as a value in milliseconds, just after the port number argument.
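Building on the command.bat invocations shown earlier, a hedged sketch of passing an increased timeout to the command application (the console port, timeout value and command here are illustrative):

```shell
command.bat -consoleLog localhost 12036 300000 cdo stopserver
```

Here 300000 (milliseconds, i.e. five minutes) is inserted just after the console port number 12036, as described above.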
The importer is an application used to extract projects from the CDO server database to a local folder. It produces as many zip files as there are modeling projects. It can also be used to import the user profiles model.
The importer also extracts information from the CDO commit history in order to produce a representation of the activity made on the repository. This information is called Activity Metadata. See the help chapters The commit history view and Commit description preferences for a complete explanation. By default, the importer extracts Activity Metadata for every commit on the repository. Be aware that the -projectName parameter has no impact on this feature: commits that do not impact the selected project are exported as well. Still, it is possible to specify a range of commits using the -to and -from parameters.
Several import strategies are supported by the Importer application; one of them relies on the cdo export command on the server OSGi console. See also the Projects - Import job documentation.
|
Important: the importer.bat file uses -vmargs as a standard Eclipse parameter. Eclipse parameters used by importer.bat override the values defined in the capella.ini file. So if you want to change a system property existing in capella.ini (-vmargs -Xmx3000m for example), do not forget to make the same change in importer.bat. |
The importer needs credentials to connect to the CDO server if the server has been started with authentication or user profiles. Credentials can be provided using either the -repositoryCredentials parameter, or the -repositoryLogin and -repositoryPassword parameters. Credentials are required only for Connected import (see the Importer strategies section above for more details). Here is the list of arguments that can be set for the Importer (in importer.bat or in a launch config):
Arguments | Description |
---|---|
-repositoryCredentials | Login and password can be provided using a credentials file. This is the recommended way, for confidentiality reasons. If the credentials file does not contain any password, the password will be searched in the Eclipse secure storage. See
how to set the password in the secure storage
This parameter must not be used with the -repositoryLogin or -repositoryPassword parameters, else the importer will fail. To use this property file:
Note: credentials are required only for Connected import (see the Importer strategies section above for more details). |
-repositoryLogin | The importer needs a login in order to connect to the CDO server if the server has been started with authentication or user profiles.
-repositoryLogin must not be used with -repositoryCredentials, else the application will fail. Note: credentials are required only for Connected import (see the Importer strategies section above for more details). |
-repositoryPassword | This parameter is used to provide the password matching the login.
If -repositoryPassword is not used, the password will be searched in the Eclipse secure storage (see how to set the password in the secure storage). -repositoryPassword must not be used with -repositoryCredentials, else the application will fail. Warning: some special characters, like double quotes, might not be properly handled when passed as arguments of the importer; the recommended way to provide credentials is through the repositoryCredentials file or the secure storage. Note: credentials are required only for Connected import (see the Importer strategies section above for more details). |
-hostname | Define the team server hostname (default: localhost). |
-port | Define the team server port (default: 2036). |
-consolePort | Define the team server console port (default: 12036). |
-consoleTimeout | Define the connection timeout in milliseconds (default: 120000 ms). |
-connectionType | The connection kind can be set to tcp or ssl (keep it in lower case) (default: tcp). |
-httpLogin | The importer application will trigger an HTTP request. This argument provides the login to identify with on the Jetty server. |
-httpPassword | The importer application will trigger an HTTP request. This argument provides the password to authenticate with on the Jetty server. |
-httpPort | The importer application will trigger an HTTP request. This argument provides the port to communicate with on the Jetty server. |
-httpsConnection | The importer application will trigger an HTTP request. This boolean argument specifies whether the connection should be HTTPS or HTTP. |
-importType | The backup is available in three different modes:
PROJECT_ONLY to only export the shared modeling projects from the CDO repository to local; SECURITY_ONLY to only export the shared user profile project from the CDO repository to local; ALL to export both. (default: PROJECT_ONLY) |
-repoName | Define the team server repository name (default: repoCapella). |
-projectName | By default, all projects are imported (with the right -importType parameter). The argument "-projectName X" can be used to import only project X (default: *). |
-runEvery | Import every x minutes (default -1: disabled). |
-archiveFolder (deprecated) | Define the folder where projects are zipped (default: workspace). This argument is deprecated; use -outputFolder instead (with -archiveProject=true, which is its default value). |
-outputFolder | Define the folder where to import projects (default : workspace). |
-logFolder | Define the folder where to save logs (default : -outputFolder). |
-archiveProject | Define if the project should be zipped (default : true). Each project will be zipped in a separate archived suffixed with the date. Some additional archives can also be created:
Note: Some library resources may not be referenced by the current project and are therefore not included in the zip.
|
-overrideExistingProject | If the output folder already contains a project with the same name, this argument allows removing the existing project. |
-closeServerOnFailure | Ask to close the server on project import failure (default: false). If the server hosts several repositories, it is better to use the parameter -stopRepositoryOnFailure. |
-stopRepositoryOnFailure | Ask to stop the repository on project import failure (default: false).
Note: it is currently not possible to restart a single repository, if defined in cdo-server.xml. To restart the stopped repository, stop and restart the server. |
-backupDBOnFailure | Backup the server database on project import failure (default: true). |
-checkSize | Check the project zip file size in KB under which the import of this project fails (default: -1 (no check)). |
-checkSession | Do some checks and log information about each imported project (default: true).
|
-errorOnInvalidCDOUri | Raise an error when the CDO URI consistency check fails (default: true). |
-addTimestampToResultFile | Add a time stamp to result files name (.zip, logs, commit history) (default: true). |
-optimizedImportPolicy | This option is no longer available since 1.1.2. |
-maxRefreshAttemptBeforeFailure | The maximum number of refresh attempts before failing (default: 10). If the number of attempts is reached, the import of a project will fail, but as this is due to the activity of remote users on the model, this specific failure will not close the repository or the server, even with "-stopRepositoryOnFailure" or "-closeServerOnFailure" set to true. |
-timeout | Session timeout used in ms (default: 60000). |
-exportCommitHistory | Whether the Commit History metadata should be exported (default: true). If the value is false, all other options about the commit history will be ignored. You should also update the "Jenkins Text Finder" configuration to avoid an unstable build; see the Jenkins Text Finder configuration section. |
-from | The timestamp specifying the date from which the metadata will be exported. If omitted, it exports from the first commit of the repository. The timestamp should use the following format: yyyy-MM-dd'T'hh-mm-ss.SSSZ. For example, for the date 03/08/2017 10h14m28s453ms on a time zone +0100, use the argument "2017-08-03T10:14:28.453+0100". The timezone may be omitted (format without the Z part); in this case, the time zone of the system is used. The timestamp can also be computed from an Activity Metadata model. In that case, this parameter can either be a URL or a path in the file system to the location of the model. If the date corresponds to a commit, this commit is included. Otherwise, the framework selects the closest commit following this date. When using a previous activity metadata, the last commit of the previous export is also included. |
-to | The timestamp specifying the latest commit used to export metadata. If omitted, it exports to the last commit of the repository. The timestamp should use the following format: yyyy-MM-dd'T'hh-mm-ss.SSSZ. For example, for the date 03/08/2017 10h14m28s453ms on a time zone +0100, use the argument "2017-08-03T10:14:28.453+0100". The timezone may be omitted (format without the Z part); in this case, the time zone of the system is used. The framework selects the closest commit preceding this date. Be careful: due to technical restrictions, this parameter only impacts the range of commits used for exporting activity metadata from the CDO server. Using this parameter will not export the version of the model defined by the given date. |
-importCommitHistoryAsText | Import commit history in a text file using a textual syntax (default: false). The file has the same path as the commit history model file, but with txt as extension. |
-importCommitHistoryAsJson | Import commit history in a json file format (default: false). The file has the same path as the commit history model file, but with json as extension. |
-includeCommitHistoryChanges | Import the commit history detailed changes for each commit done by a user with one of the save actions (default: false). The changes of commits done by wizards, actions and command line tools are not computed; those commits have a description which begins with specific tags like [Export], [Delete], [Maintenance], [User Profile], [Import], [Dump]. This option is applied for all kinds of export of the commit history (xmi, text or json files). Warning about importer performance: if this parameter is set to true, the importer might take more time, particularly if the history of commits is long. |
-computeImpactedRepresentationsForCommitHistoryChanges | Compute the impacted representations while exporting changes (default: false). For each commit with changes to export, it will compute the impacted representations. Warning about importer performance: if this parameter is set to true, the importer might take more time, particularly if the history of commits is long. |
-XMLImportFilePath | This option allows performing the import based on an XML extraction of the repository. It is mandatory for Offline and Snapshot imports; see the Importer strategies section for more details. It is recommended to provide an absolute path. Some arguments related to the server connection will be ignored. Only the -outputFolder and -repoName arguments are mandatory. |
-cdoExport | This option allows sending a snapshot creation command to the server before performing the import, as described in the Importer strategies section (default: false). The -XMLImportFilePath argument is mandatory since its path is used to create and consume the snapshot.
Note: The cdo export command takes the lock on the projects' aird resources. This strategy makes it possible to prevent concurrent saves from connected users. If the lock cannot be acquired after several attempts, an error message is logged and the import is cancelled.
|
-archiveCdoExportResult | This option defines whether the XML file resulting from the cdo export command launched by the importer as an intermediate step (if -cdoExport is true) should be zipped (default: false). If this option is true, the zip of the XML file is created in the "output folder" (see the -outputFolder documentation) and the XML file is then deleted. -archiveCdoExportResult must not be used without the -cdoExport argument set to true, otherwise the application will fail: the application will only archive the XML file if it has produced it. |
-help | Print help message. |
|
If the server has been started with user profile, the Importer needs to have write access to the whole repository (including the user profiles model). See Resource permission pattern examples section. If this recommendation is not followed, the Importer might not be able to correctly prepare the model (proxies and dangling references cleaning, ...). This may lead to a failed import. |
|
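The Offline strategy mentioned for the -XMLImportFilePath argument can be sketched as follows; this is an illustrative invocation, not from this guide: the XML path is a placeholder, and only -outputFolder and -repoName are mandatory alongside it, as stated above.

```shell
importer.bat -nosplash -data importer-workspace
 -XMLImportFilePath C:/TeamForCapella/capella/snapshot/repoCapella.xml
 -outputFolder C:/TeamForCapella/capella/result
 -repoName repoCapella
```

Since the server connection arguments are ignored in this mode, no credentials are needed for such an import.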
The importer uses the default configuration of Capella and does not need its own configuration area. For this to work properly, the importer needs read/write permission on Capella's configuration area, otherwise it can fail with access-denied errors. A common situation where this happens is when the Scheduler is launched as a Windows service: the user account executing the service is not necessarily configured with read/write permission on Capella's configuration area. If you cannot give the importer read/write permission, a workaround is to provide it with a dedicated configuration area by adding the following arguments at the end of the importer.bat file: -Dosgi.configuration.area="path/to/importer/configuration/area" and, if necessary, update the existing argument -data importer-workspace to point to a location with read/write permission. |
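As a minimal sketch of the workaround above, assuming D:/t4c is a hypothetical location where the service account has read/write permission, the arguments appended at the end of importer.bat would look like:

```shell
 -data "D:/t4c/importer-workspace"
 -Dosgi.configuration.area="D:/t4c/importer-configuration"
```

Both paths are placeholders to adapt to your installation.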
The job contains a post action that verifies that the Commit History metadata text file is generated, with the parameter exportCommitHistory set to true by default:
If you change the parameter exportCommitHistory to false, the build will become unstable because of this configuration. You should therefore deactivate the option "Unstable if found" to avoid this warning, which does not make sense with this parameter set to false. Don't forget to set it back if you set the value to true again.
Thanks to the Jenkins Text Finder post-build action, if the logs of a build contain the text Warning, the build is marked as unstable (with a yellow icon). You can go further and be notified by email in that case. In the Project - Import configuration page, scroll down or select the Post-build Actions tab. There, click on the Add post-build action button and choose E-mail notification.
On this new action, you just need to add the e-mails to be notified in case of unstable build.
The importer does not use the same credentials as the user. They are stored in a different entry in the Eclipse 'Secure Storage'. Storing and clearing the credentials requires a dedicated application that can be executed as an Eclipse Application or using a Jenkins job.
example1: import project
importer.bat -nosplash -data importer-workspace
-closeServerOnFailure true
-backupDbOnFailure true
-outputFolder C:/TeamForCapella/capella/result
-connectionType ssl
-checkSize 10
example2: import user profile model
importer.bat -nosplash -data importer-workspace
-closeServerOnFailure false
-backupDbOnFailure false
-outputFolder C:/TeamForCapella/capella/result
-connectionType ssl
-checkSize -1
-importType SECURITY_ONLY
The exporter is an application used to export all projects from a given local folder into a remote repository. It can also be used to export the user profiles model.
The Exporter application supports one strategy:
See also Projects - Export job documentation.
|
Important: the exporter.bat file uses -vmargs as a standard Eclipse parameter. Eclipse parameters that are used by exporter.bat override the values defined in the capella.ini file. So if you want to change a system property existing in capella.ini (-vmargs -Xmx3000m for example), do not forget to make the same change in exporter.bat. |
The exporter needs credentials to connect to the CDO server if the server has been started with authentication or user profile. Credentials can be provided using either -repositoryCredentials or -repositoryLogin and -repositoryPassword parameters. Here is a list of arguments that can be set to the Exporter (in exporter.bat or in a launch config):
Arguments | Description |
---|---|
-repositoryCredentials | Login and password can be provided using a credentials file. This is the recommended way for confidentiality reasons. If the credentials file does not contain a password, the password will be searched for in the Eclipse secure storage. See how to set the password in the secure storage. This parameter must not be used with the -repositoryLogin or -repositoryPassword parameters, otherwise the exporter will fail. To use this property file
|
-repositoryLogin | The exporter needs a login in order to connect to the CDO server if the server has been started with authentication or user profile.
-repositoryLogin must not be used with -repositoryCredentials, otherwise the application will fail. |
-repositoryPassword | This parameter is used to provide the exporter with the password matching the login.
If -repositoryPassword is not used, the password will be searched for in the Eclipse secure storage. See how to set the password in the secure storage. -repositoryPassword must not be used with -repositoryCredentials, otherwise the application will fail. Warning: some special characters, such as double quotes, might not be handled properly when passed as arguments to the exporter. The recommended way to provide credentials is through the repositoryCredentials file or the secure storage. |
-hostname | Define the team server hostname (default: localhost). |
-port | Define the team server port (default: 2036). |
-consolePort | Define the team server console port (default: 12036). |
-consoleTimeout | Define the connection timeout in milliseconds (default: 120000 ms). |
-connectionType | The connection kind can be set to tcp or ssl (keep it lowercase) (default: tcp). |
-repoName | Define the team server repository name (default: repoCapella). |
-sourceToExport | Define the path of the folder containing the projects to export.
This folder can be:
|
-logFolder | Define the folder where logs are saved (default: -outputFolder). |
-overrideExistingProject | If the remote repository already contains a project with the same name as a project to export, this argument allows removing the existing project (default: false). |
-mergeDifferenceOnExistingProjects | If -overrideExistingProject is set to true (default: false), this argument allows selecting one of the two following override strategies:
|
-overrideExistingImage | If the remote repository already contains an image with the same name, this argument allows ignoring and overriding it. |
-closeServerOnFailure | Ask to close the server on project export failure (default: false). If the server hosts several repositories, it is better to use the parameter -stopRepositoryOnFailure. |
-stopRepositoryOnFailure | Ask to stop the repository on project export failure (default: false).
Note: it is currently not possible to restart a single repository, if defined in cdo-server.xml. To restart the stopped repository, stop and restart the server. |
-addTimestampToResultFile | Add a time stamp to result files name (.zip, logs, commit history) (default: true). |
-timeout | Session timeout used in ms (default: 60000). |
-httpLogin | The exporter application triggers an HTTP request. This argument provides the login used to identify with the Jetty server. |
-httpPassword | The exporter application triggers an HTTP request. This argument provides the password used to authenticate with the Jetty server. |
-httpPort | The exporter application triggers an HTTP request. This argument provides the port used to communicate with the Jetty server. |
-httpsConnection | The exporter application triggers an HTTP request. This boolean argument specifies whether the connection should be HTTPS or HTTP. |
-help | Print help message. |
|
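To make the credentials options above concrete, here is a hedged sketch of an exporter invocation using a credentials file; the file path is a placeholder, not from this guide:

```shell
exporter.bat -nosplash -data exporter-workspace
 -repositoryCredentials C:/TeamForCapella/credentials.properties
 -connectionType ssl
 -sourceToExport C:/Users/me/Documents/runtime-T4C
```

The credentials file itself holds the login and, optionally, the password; its exact layout is not shown here, so refer to the property-file instructions referenced by the -repositoryCredentials row for the expected format.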
If the server has been started with user profile, the Exporter needs to have write access to the whole repository (including the user profiles model). See Resource permission pattern examples section. If this recommendation is not followed, the Exporter might not be able to override existing projects on the remote repository, for example. This may lead to a failed export. |
|
The exporter uses the default configuration of Capella and does not need its own configuration area. For this to work properly, the exporter needs read/write permission on Capella's configuration area, otherwise it can fail with access-denied errors. A common situation where this happens is when the Scheduler is launched as a Windows service: the user account executing the service is not necessarily configured with read/write permission on Capella's configuration area. If you cannot give the exporter read/write permission, a workaround is to provide it with a dedicated configuration area by adding the following arguments at the end of the exporter.bat file: -Dosgi.configuration.area="path/to/exporter/configuration/area" and, if necessary, update the existing argument -data exporter-workspace to point to a location with read/write permission. |
The exporter does not use the same credentials as the user. They are stored in a different entry in the Eclipse 'Secure Storage'. Storing and clearing the credentials requires a dedicated application that can be executed as an Eclipse Application or using a Jenkins job.
example1: export project
exporter.bat -nosplash -data exporter-workspace
-closeServerOnFailure true
-connectionType ssl
-sourceToExport C:\Users\me\Documents\runtime-T4C
Like any Eclipse application, Team for Capella uses preferences to manage the behavior of the application.
There are several preference scopes, including the default scope and the instance scope. The instance scope, if set, takes priority over the default scope. The default scope holds the default values provided by the application. The instance scope corresponds to the preferences a user can change in the Preferences dialog box, accessible from the menu Window > Preferences. These preferences are stored in the user's workspace. For more details, refer to the Eclipse Preferences documentation.
For more information about the preferences used for Team For Capella, refer to the client preferences documentation.
The Administrator, in charge of customizing the product functionalities, may want to
To initialize the default preferences without having to provide a plug-in, you can use the pluginCustomization Eclipse parameter. Refer to Eclipse Runtime documentation for more information.
The principle is to declare a property file which contains pairs of key/value. The key is the qualified name of the preference and the value is the value of the preference.
Preferences have a default value that is associated with the Team for Capella application. This chapter explains how to change that default value. Nevertheless, the user can still use a value different from the default one through the Preferences dialog box. This sets a value in the scope corresponding to the user workspace, and the workspace scope has a higher priority than the default scope.
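The key/value principle described above can be sketched with a pluginCustomization properties file; the preference keys come from the tables in this chapter, while the file name is a placeholder:

```properties
# plugin_customization.ini (file name is a placeholder)
# Sirius "Automatic Refresh" / "Do refresh on representation opening"
org.eclipse.sirius.ui/PREF_REFRESH_ON_REPRESENTATION_OPENING=true
# Team collaboration: release all explicit locks after committing
fr.obeo.dsl.viewpoint.collab/PREF_RELEASE_EXPLICIT_LOCK_ON_COMMIT=false
```

The file is then passed through the standard Eclipse launch argument -pluginCustomization path/to/plugin_customization.ini, as described in the Eclipse Runtime documentation.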
Sirius Preferences |
Preference keys |
Default value if not set |
Sirius "Automatic Refresh" and "Do refresh on representation opening" |
org.eclipse.sirius.ui/PREF_REFRESH_ON_REPRESENTATION_OPENING=<boolean value>
|
true |
Team collaboration Preferences |
Preference keys |
Default value if not set |
Check by default the checkbox in the "Capella Connected Project" wizard to have the Sirius Refresh preferences specific to the connected project being created.
|
fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_PROJECT_SPECIFIC_SETTINGS_DEFAULT_VALUE=<boolean value>
|
true |
Connection Url
|
|
|
Commit history view
|
|
|
Release all explicit locks after committing |
fr.obeo.dsl.viewpoint.collab/PREF_RELEASE_EXPLICIT_LOCK_ON_COMMIT=<boolean value> |
false |
Display Write Permission Decorator |
fr.obeo.dsl.viewpoint.collab/PREF_DISPLAY_WRITE_PERMISSION_DECORATOR=<boolean value> |
true |
Ability to lock the semantic element at representation creation or move |
fr.obeo.dsl.viewpoint.collab/PREF_LOCK_SEMANTIC_TARGET_AT_REPRESENTATION_LOCATION_CHANGE=<boolean value> |
true |
Sometimes, the value of a preference is complex. This is the case for some preferences visible in the Preferences dialog box. To know the value of a particular preference:
Once you have configured the preferences using the Preferences dialog box, you have to export them to a text file:
Then each user will have to import the preference file to set the preference values for his workspace.
|
|
System administrators handle the installation, configuration and authentication on the CDO server that is used for sharing Capella projects. For these activities, Team for Capella provides the following functionalities in Eclipse or as jobs which can be installed in a Jenkins used as a scheduler:
Team for Capella bundles and installation guide are available at https://www.obeosoft.com/en/team-for-capella-download.
The documentation of Team for Capella presents many applications (Backups, diagnostics...) that can be scheduled with Jenkins in order to have a centralized platform to manage your shared projects.
It is recommended to install a 2.375.x LTS release. Team for Capella 6.1.0 has been tested with Jenkins 2.375.3 LTS release.
If you choose to deploy a more recent version, we strongly recommend to use a release from the LTS (Long Term Support) stable releases stream available at Jenkins.io.
|
The default Jenkins port is 8080, but it is recommended to set the port to 8036 (in the previous Team for Capella installation, the embedded Jenkins was deployed on port 8036). Otherwise, there will be a conflict with the REST admin server, whose default port is 8080. The port can be chosen in the Jenkins installation wizard. The following documentation will often reference port 8036. |
The Jenkins 2.375.3 LTS Windows installer can be downloaded from this link.
If you choose to deploy a more recent version, we strongly recommend to use a release from the LTS (Long Term Support) stable releases stream available at Jenkins.io.
Once downloaded, proceed to the installation.
It is recommended to install the Jenkins service (automatic loading on restart) and the suggested plugins.
The Jenkins 2.375.3 LTS packages for Linux can be downloaded from the LTS Releases package repository corresponding to the targeted distribution; see this link.
The scheduler has been tested on RedHat and Debian based distributions. The Jenkins installation instructions are available at Installing Jenkins: Linux
The Server and Importer applications require a display to be executed properly. An Xvnc server needs to be installed on the Linux server.
On Debian based distributions, you can install either tigerVNC or TightVNC:
sudo apt install tightvncserver
sudo apt install tigervnc-standalone-server
On RedHat based distributions:
dnf install tigervnc-server
In addition, make sure that the Xvnc jenkins plugin is installed on the Jenkins (it is installed by install-TeamForCapellaAppsOnJenkins.sh).
Note: Make sure that the jenkins user has read, write and execution permission on the TeamForCapella root folder.
At the end of the installation, your web browser should be displaying Jenkins.
Once Jenkins is installed, you can run our installation script that will install all the jobs allowing the Jenkins scheduler to manage the different Team for Capella applications. This script also downloads all the Jenkins plugins required for the different jobs.
In your Team for Capella installation folder, go to the tools/resources/scheduler folder. In this folder, you will find a script install-TeamForCapellaAppsOnJenkins.bat (or install-TeamForCapellaAppsOnJenkins.sh for Linux); edit this file in a text editor.
Not only does it contain all the required commands to download and install the plugins, but there are also some parameters for accessing Jenkins to fill in. These parameters are:
As documented in https://www.jenkins.io/doc/book/managing/cli/, you can get your API token from the /me/configure page of your Jenkins. The script will automatically download the Jenkins CLI client and use it to install the plugins. Then it will create all the Team for Capella jobs and sort them into different views. Finally, once the script has finished, you only need to restart Jenkins. The simplest way is to use the /restart page of your Jenkins. On Windows, if you have installed Jenkins as a service, you can also restart it from your system Services window.
The dashboard will present all the Team for Capella applications.
Note that the plugin versions were chosen at the time of the release of the Team for Capella version you are working on. Once the script has executed, it is recommended to keep Jenkins up to date and to check for new updates of the installed plugins. Go to Manage Jenkins > Manage Plugins. On the Update tab, select all plugins and then click on Download now and install after restart.
|
These jobs execute Team for Capella applications, therefore Jenkins requires a global environment variable referencing the location of your Team for Capella installation:
Note that the development team is working on improving the installation script to add this variable, but some Jenkins APIs have been removed for security reasons as it was seen as code injection. |
Additional configuration steps are recommended, see Executors, Locale, Default view and Display Job Description in miscellaneous settings section.
Restart Jenkins or its service after this configuration phase.
If you do not wish to install the Team for Capella applications with the script, you can still proceed manually.
The first step is to install the required plugins. In your Team for Capella installation folder, go to the
tools/resources/scheduler folder, you will find two files with names starting with
RequiredPlugins.
They contain the same list of plugins: one lists them by name, the other lists them by URL to their .hpi.
You need to install all of them. Go to
Manage Jenkins > Manage Plugins to install them from the plugin manager.
Then restart Jenkins.
Now that the required plugins have been installed, the Team for Capella jobs can be deployed as well:
Restart Jenkins and now the dashboard will present all the Team for Capella applications.
|
These jobs execute Team for Capella applications, therefore Jenkins requires a global environment variable referencing the location of your Team for Capella installation:
|
Finally, as there are many jobs, they will be easier to manage if you group these applications into tabs:
As an example, you can order your tabs as follows:
Additional configuration steps are recommended, see Executors, Locale, Default view and Display Job Description in miscellaneous settings section.
Go to the directory where you installed Jenkins (by default, under Program Files/Jenkins), edit jenkins.xml, then update the value of --httpPort in the <arguments> tag of the service definition:
<executable>java</executable> <arguments> -some -arguments --httpPort=8036 -some -other - arguments</arguments>
Finally, go to Windows service, and restart the Jenkins service (or restart the Jenkins server if you launched it manually).
Go to the directory where you installed Jenkins (by default, under Program Files/Jenkins), edit jenkins.xml, then update the values of the <id> and <name> tags of the service definition:
<id>TeamForCapellaScheduler</id> <name>Team For Capella Scheduler</name>
Open a Command Prompt as administrator in this folder and execute the following commands:
sc stop jenkins
sc delete jenkins
jenkins.exe install
jenkins.exe start
Finally, go to Windows service, and check that
The configuration file after a standard installation is located in:
/etc/default/jenkins for most Linux distributions;
/etc/sysconfig/jenkins for RedHat/CentOS distributions.
By default, the port is 8080:
HTTP_PORT=8080
The service has to be restarted after the port modification:
systemctl restart jenkins
It is possible to force Jenkins to use some specific folders. Go to the directory where you installed Jenkins (by default, under Program Files/Jenkins), edit jenkins.xml, then complete the <arguments> tag of the service definition:
-Djava.io.tmpdir=%JENKINS_HOME%\temp
--extractedFilesFolder="%JENKINS_HOME%\temp"
Finally, go to Windows service, and restart the Jenkins service (or restart the Jenkins server if you launched it manually).
Open the Jenkins configuration file (see the previous Change the Port Used by Jenkins paragraph for the configuration file location) and add the temporary directory to the JAVA_ARGS variable:
JAVA_ARGS="-Djava.io.tmpdir=$JENKINS_HOME/temp"
Then add the extracted-files folder to the JENKINS_ARGS variable:
--extractedFilesFolder="$JENKINS_HOME/temp"
It is recommended to check for updates. On the top-right area, Jenkins will show notifications if there are some updates or issues identified. Furthermore, when you select the Manage Jenkins menu, the top area will present updates or corrections that can be applied to Jenkins or its plugins. Depending on the importance it will be presented in different colors (red>yellow>blue). Most of the time, it is notifications about new updates but in any case, it is a good practice to check this page once in a while and follow what is presented.
The Jenkins service can be stopped and deleted using the following commands in a Windows Command Prompt:
sc stop jenkins
sc delete jenkins
The id of the service is jenkins by default but you might have changed it as described in Change the name and id of the Jenkins service section.
Jenkins can be completely removed from your system with the use of its Windows Installer.
In this document you will discover how to manage a Server supporting Collaborative Modeling features.
The main configuration file used by the Team for Capella Server is the cdo-server.xml file.
The Team for Capella Server bundle comes as a standard Eclipse application. In the installed package, locate the Configuration folder and open it.
In this folder, locate the cdo-server.xml file and open it.
Here is a commented extract of the cdo-server.xml delivered with Team for Capella:
Highlighted elements can be changed to customize the Team for Capella Server.
Note that many repository configuration options cannot be changed anymore after the repository has been started for the first time or once some data have been exported to the server. If you need to change something in this configuration afterwards, you should first delete the database files (files with the db extension). A typical example is changing the name of the repository. The only elements you can change in the configuration file afterwards are the type of access control (userManager, securityManager, ldap or none) and the acceptor. |
To activate the authenticated server, you have to set the line below in the cdo-server.xml file before the <store> tag.
<userManager type="auth" description="usermanager-config.properties"/>
usermanager.properties is a path to the authenticated server configuration file. The path can be absolute or relative to the cdo-server.xml file.
users.file.path=users.properties
# ldap configuration
auth.type=ldap
auth.ldap.url=ldap://127.0.0.1:10389
auth.ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
auth.ldap.filter=
auth.ldap.tls.enabled=false
auth.ldap.truststore.path=
auth.ldap.truststore.passphrase=
# openID Connect configuration
#auth.type=openidconnect
#auth.openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
#auth.openIDConnect.tenant=organizations
#auth.openIDConnect.clientID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#auth.openIDConnect.technicalUsers.file.path=technicalUsers.properties
users.file.path is the name of the file containing the users. This file has to be copied into the root server installation folder. You can add new users by modifying the users.properties file.
auth.xxx corresponds to the LDAP configuration or the OpenID Connect configuration. The properties are prefixed by auth. Be careful to uncomment at most one of the LDAP or OpenID Connect configurations.
The file users.properties contains entries whose keys are the logins and whose values are the passwords. Note that spaces must be escaped with \, otherwise they will be considered as key-value separators.
Examples:
admin=admin
John\ Doe=secret
Note: This is the default mode; when Team for Capella is installed, the server is set with a file authentication configuration.
You must not escape spaces in the login field required to connect to a remote model (see the Connect to remote model section). The same applies when you create a new user through the "security model" (see the Access Control section).
As access control modes are exclusive, the other modes must be commented out in the cdo-server.xml file:
<!-- <securityManager type="collab" .../> -->
<!-- <authenticator type="ldap" .../> -->
The server must be restarted to take into account the modifications done in the cdo-server.xml file.
On the client side, use the User Management view available in all Team for Capella clients. When using this view, the server does not need to be restarted after changes to the user accounts.
To activate the user profile server, you have to set the line below in the cdo-server.xml file before the <store> tag. The user profiles model is created at the first server launch.
Once activated, you must see this during the Team for Capella Server starting:
<securityManager type="collab" realmPath="userprofile-config.properties" />
userprofile-config.properties
is a path to the user profile configuration file. The path can be absolute or relative to the cdo-server.xml file.
realm.users.path=users.userprofile
administrators.file.path=administrator.properties
# ldap configuration
auth.type=ldap
auth.ldap.url=ldap://127.0.0.1:10389
auth.ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
auth.ldap.filter=
auth.ldap.tls.enabled=false
auth.ldap.truststore.path=
auth.ldap.truststore.passphrase=
# openID Connect configuration
#auth.type=openidconnect
#auth.openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
#auth.openIDConnect.tenant=organizations
#auth.openIDConnect.clientID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#auth.openIDConnect.technicalUsers.file.path=technicalUsers.properties
realm.users.path is the name of the resource that contains the user profile model.
administrators.file.path is a path to the administrators file. The path can be absolute or relative to the cdo-server.xml file. This file is only used to initialize administrators in the user profile model during the first start of the repository with the User Profile mode enabled (repository creation, for example). It is mandatory because user profiles can only be defined by an administrator.
auth.xxx corresponds to the LDAP configuration or the OpenID Connect configuration. These properties are prefixed by auth. Uncomment at most one of the two configurations (LDAP or OpenID Connect).
Be aware that once the server has been launched with the User Profile mode enabled, modifications to this file have no effect. If you want to manage the list of administrators, please have a look at the User Profiles documentation, in particular the Promote a User to Super User section, to promote an existing user to administrator. Alternatively, you can make backups (shared projects and User Profiles model), stop the server, delete the database, modify the administrators file, restart the server and re-export your data.
As access control modes are exclusive, the other modes must be commented out in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <authenticator type="ldap" .../> -->
The server must be restarted to take into account the modifications done in the cdo-server.xml file.
This configuration allows working with a CDO server without authenticating from a client.
Just comment out the securityManager, userManager and authenticator tags in the cdo-server.xml file:
<!-- <securityManager type="collab" .../> -->
<!-- <userManager type="auth" .../> -->
<!-- <authenticator type="ldap" .../> -->
The server must be restarted to take into account the modifications done in the cdo-server.xml file.
You can activate LDAP authentication in three different ways. These ways are mutually exclusive.
The server must be restarted to take into account the modifications done in the cdo-server.xml file.
To activate LDAP authentication, as exclusive authenticator, the following authenticator tag must be added to the repository configuration in cdo-server.xml.
<authenticator type="ldap" description="ldap-config.properties" />
ldap-config.properties
is a path to a properties file containing the LDAP authenticator configuration. This path may be relative to the CDO server configuration file or absolute.
As access control modes are exclusive, the other modes must be commented out in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <securityManager type="collab" .../> -->
The LDAP authenticator’s configuration file is a properties file whose content could look like the following one:
ldap.url=ldap://127.0.0.1:10389
#ldap.url=ldaps://127.0.0.1:10389
ldap.dn.pattern=cn={user},ou=people,o=sevenSeas
ldap.filter=
ldap.tls.enabled=true
ldap.truststore.path=trusted.ks
ldap.truststore.passphrase=secret
where:
<ldap or ldaps>://<IP_address or domain_name>:<port>
When the LDAP authenticator is used in the User Profile or Authenticated configurations, these property keys must be prefixed with auth., and auth.type=ldap is needed to activate the LDAP authentication.
Important!
Unlike the other two configuration ways (with «user profile server» and «authenticated server»), in the «exclusive authenticator» configuration the properties are not prefixed by auth.
If the LDAP certificate has been signed by an official Certificate Authority it is not required to set the trust store path as the JVM already trusts the CA.
If you need to generate a self-signed certificate or need to create a trust store from an existing certificate please refer to the following section.
An LDAP server using Active Directory provides a field sAMAccountName that is usually used as a key (like the «cn» field). Users can be identified using this field combined with a domain name, separated by an «@». This leads to the pattern sAMAccountName@DomainName. As the user identifies himself by providing only his identifier, not the domain name, the corresponding pattern is: {user}@DomainName.
For instance, if the domain name is «MyCompanyDomain» then the LDAP pattern will be:
auth.ldap.dn.pattern={user}@MyCompanyDomain
Some LDAP servers do not support anonymous binding (the LDAP server does not allow even a query without authentication). In that case, Capella first has to authenticate itself against the LDAP server, which it does by sending the «manager» DN and password. Using this connection, the user credentials (entered by the user in the authentication popup) can be looked up in the LDAP tree.
The manager credentials need to be provided in the properties file, as they will not be asked from the user. They are provided with the following properties:
The search for the user himself in the LDAP is provided with the following properties:
# ldap configuration
ldap.url=ldap://ldap.myCompany.com:389
ldap.user.search.base=dc=myCompany,dc=com
ldap.user.search.filter=(&(objectClass=account)(cn={user}))
# The manager credentials are useful for LDAP requiring authentication to run search filters
ldap.manager.dn=uid=manager,ou=People,dc=myCompany,dc=com
ldap.manager.password=DerfOcDoocs6
ldap.tls.enabled=false
# ldap configuration
ldap.url=ldap://ldap.myCompany.com:389
ldap.user.search.base=dc=myCompany,dc=com
ldap.user.search.filter=(&(objectClass=organizationalPerson)(name={user}))
# The manager credentials are useful for LDAP requiring authentication to run search filters
ldap.manager.dn=manager@myCompany.com
ldap.manager.password=managerPassword
ldap.tls.enabled=false
In case the certificate is self-signed or the CA that signed your certificate is not trusted by the JVM, you will need to generate a truststore and reference it from the configuration file.
Follow the Export and TrustStore creation steps to create the trust store.
With a server set up with OpenID Connect authentication, the user will be able to authenticate using the UI provided by the OpenID Connect platform. Instead of the default dialog where the user enters his login and password, the embedded T4C web server will open a web browser popup interacting with the OpenID Connect platform.
For instance, for a server set up with MS Azure AD, here is the user experience when the user clicks the «Test Connection» button of the Connection wizard: a web browser is displayed and presents a sign-in interface provided by MS Azure AD.
Then, the user follows the authentication process through the different web pages provided by the OpenID Connect platform depending on how it is configured.
Finally, the user is presented a web page indicating whether the authentication was successful. The user can close the browser and continue as usual. On this page, a «Logout» hyperlink allows logging out the current user; the end user is then redirected to the sign-in page and may sign in with another login.
Technical views such as CDO views or Administration views still authenticate with basic login/password credentials. See Configure OpenID Connect authenticator to know how to configure these credentials.
You can activate the OpenID Connect authentication:
Note: For the combination with both «user profile server» and «authenticated server», the user name to configure in Team For Capella must correspond to the attribute "Name" of the user in the OpenID Connect authentication platform.
The server must be restarted to take into account the modifications done in the cdo-server.xml file.
To activate the OpenID Connect authentication, as exclusive authenticator, the following authenticator tag must be added to the repository configuration in cdo-server.xml. Make sure the other tags are commented.
<authenticator type="openidconnect" description="openid-config.properties" />
openid-config.properties
is a path to a properties file containing the OpenID Connect authenticator configuration. This path may be relative to the CDO server configuration file or absolute.
As access control modes are exclusive, the other modes must be commented out in the cdo-server.xml file:
<!-- <userManager type="auth" .../> -->
<!-- <securityManager type="collab" .../> -->
Finally, the OpenID Connect authentication requires a web server in order to communicate securely with the OpenID Connect platform. If the CDO server is configured with the OpenID Connect authentication mode, the embedded web server must be activated for this secure communication.
<installation folder>/server/configuration/openid-config.properties
is the OpenID Connect authenticator’s configuration file. It is a properties file whose content could look like the following one:
openIDConnect.discoveryURL=https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration
openIDConnect.tenant=organizations
openIDConnect.clientID=79bce8de-7542-4b90-bf18-XXXXXXXXXXXX
openIDConnect.technicalUsers.file.path=technicalUsers.properties
where:
As presented before, the OpenID Connect authentication requires a web server in order to authenticate securely. This is the same web server as the one providing the web services (REST API) for repository management. See the dedicated section for how to install and activate this experimental feature.
To activate the OpenID Connect support, set the admin.server.jetty.auth.openidconnect.enabled property to true in <installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties.
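Concretely, the change is this single line (file path as given above):

```properties
# fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties
admin.server.jetty.auth.openidconnect.enabled=true
```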
Note that if the Team for Capella server and all the Team for Capella clients are not installed on the same machine, you will need to configure the web server in HTTPS mode; this is a security requirement from the OpenID Connect platform. So:
the http protocol can only be used with localhost (this is the default configuration);
otherwise, the web server must be configured with https.
To configure the admin server with https, make the following changes in <installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties:
# Jetty configuration
admin.server.jetty.https.enabled=true
# The following lines are needed if the admin.server.jetty.https.enabled option is set to true.
admin.server.jetty.ssl.host=0.0.0.0
admin.server.jetty.ssl.port=8443
admin.server.jetty.ssl.keystore.path=${currentDir}/<keystoreFile>
admin.server.jetty.ssl.keystore.passphrase=<password>
On the OpenID Connect platform, one property needs to be properly set: the redirect URI. The embedded web server expects the redirect URI to be the page /auth/redirect. This means that the redirect URI must be set to:
http://localhost:8080/auth/redirect if the Team for Capella server is local to the Team for Capella client;
https://<IP admin server>:8443/auth/redirect if the Team for Capella server is installed on a different machine than the Team for Capella client.
If your OpenID Connect platform is MS Azure AD, here is a quick way to find how to configure the OpenID Connect authenticator in Team for Capella.
First, the openIDConnect.discoveryURL is provided by the OpenID Connect platform itself, not by your application. For MS Azure AD, this protocol is presented in the online documentation. On the same page, there is a list of the different possible values for the openIDConnect.tenant property.
For the openIDConnect.clientID, you will need to look for it in the application you created in MS Azure AD in order to use it for authentication from Team for Capella. From the MS Azure AD home page, select App registration, then select your application for Team for Capella. From the overview, you can see the Application ID.
Note that from this menu, you must set the redirect URI from the Authentication menu: in Platform configuration, add a Web platform and set the redirect URI.
The last property, openIDConnect.domainURL, depends on the location/address of the web server and is not linked to the OpenID Connect configuration.
On your application, do not forget to add the users that will be able to authenticate to the application:
It is also recommended to create a conditional access policy (Security/Conditional Access) so you can set a timeout on the session once users are authenticated. You can also define how users are granted access (for instance with multi-factor authentication).
Note that to be able to add conditional access policies, you need to disable the security defaults.
Note that the following options must be activated because the authentication uses the implicit grant:
The Audit mode aims to configure the server so it keeps track of all versions of each object in the CDO server database. It is required, for example, for comparing different versions of the model.
There are two different auditing configurations: Audit and Audit with ranges.
The Audit with ranges mode was the default mode between Team for Capella 1.3.0 and Team for Capella 5.0.0. The Audit mode has been the default since Team for Capella 5.1.0, to improve user-side performance (export, export with override, semantic browser refresh, ...).
The difference between the two modes is in the storage of lists: when the with ranges variant is used, the database stores only the delta between each version of a list. This requires loading all preceding revisions of a list to compute a given state, but in some situations it can slow the growth of the database. An analysis of the project can lead to a recommendation to switch to this mode.
When using the auditing modes, the size of the database might need to be monitored. If the database grows beyond 4 GB and the user encounters performance issues, the database might need to be cleared: import the models from the server, clear the database, then import the models back into the new database. Be aware that after this operation it is no longer possible to compare new commits against the commits done before the clearance. In benchmarks, this size was never reached even after 10 000 commits modifying semantic and graphical elements. In this context, model modification and saving times increase slightly compared to a server without audit mode enabled, but both operations still feel smooth for the user.
Be aware that it is not possible to switch between the «Audit», «Audit with ranges» and non-«Audit» modes on a CDO server that holds models. The switch has to be done on an empty CDO server database.
In order to disable the Audit mode, change cdo-server.xml to:
<property name="supportingAudits" value="false"/>
<mappingStrategy type="horizontalNonAuditing">
...
<!-- property name="withRanges" value="false"/ -->
</mappingStrategy>
In order to (re-)activate the Audit mode, change cdo-server.xml to:
<property name="supportingAudits" value="true"/>
<mappingStrategy type="horizontalAuditing">
...
<property name="withRanges" value="false"/>
</mappingStrategy>
In order to activate the Audit with ranges mode, change cdo-server.xml to:
<property name="supportingAudits" value="true"/>
<mappingStrategy type="horizontalAuditing">
...
<property name="withRanges" value="true"/>
</mappingStrategy>
It is possible to activate a WebSocket connection between the client and the CDO server.
Both client and server have to be configured accordingly.
On the client side, users will have to use the WS or WSS connection type, depending on the configuration of the server. The client-side configuration will depend on the global deployment of the current server and on the use of the WS and WSS connection types.
A user will then have to use the following parameters to connect to the repository: the admin server port (admin.server.jetty.port, or admin.server.jetty.ssl.port if HTTPS is enabled, or the specific proxy port if Team for Capella is deployed behind a proxy).
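For illustration (the host is a placeholder), with the default acceptor name and servlet path, the repository connection endpoints take this form:

```text
ws://203.0.113.10:8080/net4j/@default    # WS:  admin.server.jetty.port
wss://203.0.113.10:8443/net4j/@default   # WSS: admin.server.jetty.ssl.port (HTTPS enabled)
```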
When the REST Admin server runs in HTTPS mode, it will be configured with a certificate.
If this certificate is self-signed or untrusted, the following system properties can be added in the client capella.ini file in order to configure the security checks:
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.trustall=true
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.endpointIdentificationAlgorithm
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.passphrase
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.trust
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.trust.type
-Dfr.obeo.dsl.viewpoint.collab.https.jetty.ssl.context.trust.manager.factory.algorithm
Those properties are used to configure Jetty’s org.eclipse.jetty.util.ssl.SslContextFactory. Additional properties might be needed; see the server configuration section.
When WebSocket transport is activated on the server, the importer and other tools must be configured accordingly in order to work. The same configuration as the client needs to be done in the -vmargs section of the tools scripts (importer.bat, maintenance.bat, exporter.bat, ...).
The REST Admin Server and the CDO Server need to be configured to enable the Net4j WebSocket-based transport:
admin.server.jetty.net4j.enabled=true in <TeamForCapella installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties allows to deploy the Net4j WebSocket servlet.
In cdo-server.xml, an acceptor of type ws or wss must be declared. <acceptor type="ws"/> is the simplest and default WebSocket-based acceptor; additional configurations are explained below. It can be declared instead of the default TCP acceptor (<acceptor type="tcp" listenAddr="0.0.0.0" port="2036"/>) or as an additional one.
The move from a WebSocket-based transport to a secured WebSocket-based transport can be done through the Jetty configuration by enabling HTTPS, or with the use of an HTTPS reverse proxy server (Nginx or Apache, for example).
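As an illustrative sketch only (hostname, ports and certificate paths are assumptions, not part of the product), a TLS-terminating Nginx reverse proxy in front of the plain ws acceptor could look like:

```nginx
# Hypothetical reverse proxy: clients connect with wss://t4c.example.com:8443/net4j/...,
# Nginx terminates TLS and forwards to the plain HTTP Jetty admin server on port 8080.
server {
    listen 8443 ssl;
    server_name t4c.example.com;                  # assumed hostname

    ssl_certificate     /etc/ssl/certs/t4c.crt;   # assumed certificate paths
    ssl_certificate_key /etc/ssl/private/t4c.key;

    location /net4j/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;                   # required for the WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;                 # keep long-lived connections open
    }
}
```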
Here is a list of optional settings which will impact both server and client configurations:
To change the acceptor name, from ws://1.2.3.4:8080/net4j/@default to ws://1.2.3.4:8080/net4j/@YourAcceptorName:
On the server side, declare <acceptor type="ws" listenAddr="YourAcceptorName" />.
On the client side, add -Dfr.obeo.dsl.viewpoint.collab.net4j.ws.acceptor=YourAcceptorName (same value as the listenAddr attribute of the acceptor tag used on the server side).
To change the servlet path from the default (/net4j) to the path of your choice, i.e. from ws://1.2.3.4:8080/net4j/ to ws://1.2.3.4:8080/your/path/:
On the server side, set admin.server.jetty.net4j.path=your/path.
On the client side, add -Dfr.obeo.dsl.viewpoint.collab.net4j.ws.path=your/path.
To protect the Net4j servlet with basic authentication like the other servlets (except openapi/):
On the server side, set admin.server.jetty.net4j.remove.public.constraint=true.
On the client side, add:
-Dorg.eclipse.net4j.internal.ws.WSClientConnector.clientBasicAuth.login=sampleuser
-Dorg.eclipse.net4j.internal.ws.WSClientConnector.clientBasicAuth.password=samplepassword
It is possible to activate an SSL connection between the client and the CDO server.
Both client and server have to be configured accordingly.
On the server side a keystore has to be set up and, on the client side, a trust store containing the keystore's public key has to be set up. See the chapter Managing certificate to generate keystore and truststore.
Add the following lines in the client capella.ini file:
-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret
-Dorg.eclipse.net4j.tcp.ssl.trust=file:///<trusted.ks absolute path>
When SSL is activated on the server, the importer and other tools must be configured accordingly in order to work.
Add the following lines in the script files (importer.bat, maintenance.bat, exporter.bat):
-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret ^
-Dorg.eclipse.net4j.tcp.ssl.trust=file:///<trusted.ks absolute path> ^
In the cdo-server.xml configuration file, the acceptor has to be configured to accept SSL connections:
<acceptor type="ssl" listenAddr="0.0.0.0" port="2036"/>
Set the acceptor type to ssl.
Add the following lines in the server ini file:
-Dorg.eclipse.net4j.tcp.ssl.passphrase=secret
-Dorg.eclipse.net4j.tcp.ssl.key=file:///<server.ks absolute path>
Keytool can be used to create and manage certificates and stores. This tool is provided with the JDK and its documentation is available here.
The keystore contains certificate information, private and public key. To generate it use the following command:
keytool -genkey -ext SAN=IP:<server IP> -keyalg "RSA" -dname o=sevenSeas -alias keystore_alias -keystore server.ks -storepass secret -validity 730 -keysize 4096
-ext: For example, <server IP> may be the LDAP server for an SSL connection between the CDO server and the LDAP server, or the CDO server for an SSL connection between client and CDO server.
-dname: optional. It initializes the metadata of your organization.
This step is optional; you may instead proceed directly with Export certificate from a keystore.
For this step, you have to give your certificate signature request (server.csr) to your certificate authority (CA), which in return will provide a signed certificate (server.crt).
keytool -certreq -alias keystore_alias -file server.csr -keystore "server.ks"
The two steps below allow importing the root certificate and the intermediary certificate.
keytool -import -alias Root_CA -keystore server.ks -file Root_CA.cer
keytool -import -alias Server_CA -keystore server.ks -file Server_CA.cer
Then, import the signed certificate into the server.ks keystore.
keytool -import -alias keystore-signed -keystore server.ks -file server.crt
To export a certificate from an existing keystore, the following command can be used:
keytool -export -keystore server.ks -alias keystore_alias -file server.cer
This command asks for the store’s passphrase and then creates a server.cer file containing the certificate previously created.
It is advised not to deploy the whole keystore on clients. Instead, create a truststore containing only the certificate and public key; this truststore is intended to be deployed on the clients which need to connect to the server.
keytool -import -file server.cer -alias keystore_alias -keystore trusted.ks -storepass secret
This command creates a new truststore in the file trusted.ks. This truststore contains the server’s public key; it can be copied to client machines and referenced via the truststore.path configuration key. The truststore is protected with secret as a passphrase.
The Team For Capella server is composed of the CDO repositories server and an HTTP Jetty server.
By default, the Jetty admin server is automatically started with the CDO server on port 8080.
The admin server is used:
You can find more information in the file <TeamForCapella installation folder>/server/configuration/fr.obeo.dsl.viewpoint.collab.server.admin/admin-server.properties : it contains all the admin server configuration information.
The REST Admin Server provides a whole set of services to manage the projects, the models and the users.
The documentation is available at the URL http(s)://<admin server IP>:<admin server port>/doc.
A Swagger documentation is available at the URL http(s)://<admin server IP>:<admin server port>/openapi. It can be enabled or disabled with the admin.server.jetty.servlets.admin.docandopenapi.enabled property.
The first time the server is launched, a default «admin» user and its associated default token are created in the Eclipse secure storage of the user that started the CDO server. The «admin» credentials are stored in a dedicated node used by the server. The token is hashed and encrypted.
A secret.txt file, containing the token, is created in the same folder as the admin-server.properties file. It can be used in third-party applications to authenticate with the admin server. Do not forget to remove this file as soon as you can.
Moreover, the admin credentials are also added to the secure storage for the application needs (importer, exporter, etc.) in a dedicated node. The credentials are encrypted.
This way, once the server has been started the first time, there is no additional step: the applications can automatically be used, being authenticated with the admin server as the «admin» user.
Nevertheless, it is possible to manage the user and the user token with the Credentials application.
By default, the secure storage is created or retrieved from the home of the system user currently executing the application:
On Windows: %USERPROFILE%\.eclipse\org.eclipse.equinox.security\secure_storage, for example C:\Users\someUser\.eclipse\org.eclipse.equinox.security\secure_storage, or C:\Windows\System32\config\systemprofile\.eclipse\org.eclipse.equinox.security\secure_storage for the system account.
On Linux: ~/.eclipse/org.eclipse.equinox.security/secure_storage, for example /home/someUser/.eclipse/org.eclipse.equinox.security/secure_storage.
On macOS: ~/.eclipse/org.eclipse.equinox.security/secure_storage, for example /Users/someUser/.eclipse/org.eclipse.equinox.security/secure_storage.
It is also possible to change the location of the secure storage with the -eclipse.keyring program argument in both TeamForCapella/server/server.ini and TeamForCapella/capella/capella.ini. The secure storage must be shared between server-side client, tools and server in order to be able to use it from the Scheduler jobs. For example, to use a fixed secure storage located in TeamforCapella/.eclipse/secure_storage:
-eclipse.keyring
../.eclipse/secure_storage
Installation process and details are described in the Installation Guide for Team for Capella.
Moreover, do not install any viewpoint except the PROPERTIES KEY/VALUES-typed viewpoint. Ask viewpoint providers whether their viewpoint is compatible with Team for Capella.
If the viewpoint is compatible with Team for Capella, deploy it on every Team for Capella client and on the importer used by the server. Clean and export models again after a viewpoint installation.
This is the recommended configuration to work with several projects.
Hypothesis: the repository is added to a freshly installed instance.
Add a new repository to the Team for Capella Server:
Note the 2 default repositories (content is collapsed in this screenshot),
Notes:
Add a new job to Team for Capella Scheduler (Jenkins) to manage the new repository:
Check the configuration is working: start the Team for Capella server using the "Server – Start" job and open the TeamForCapella\server\ folder.
db and workspace folders should have been created:
Hypothesis: the server is added to a freshly installed instance; by default it will only contain the default repository "repoCapella".
The main methods to close the server are the following:
To avoid database corruptions, the server must never be closed in the following ways:
- using the “Abort” button on the Server – Start job of the Scheduler;
- especially on Windows 2008 Server 64 bits platforms: closing the command prompt running the server (if any) by clicking on the Windows close button.
To restart with a clean server or after a database corruption, it can be useful to reset the server:
Note that it is also possible to restore the database from the result artifacts of the Database – Backup job, refer to the Capella client Help Contents in chapter Team for Capella Guide > System Administrator Guide > Server Configuration > Reinitialize database.
The following line is used to configure the database (in cdo-server.xml):
To improve performance when exporting big models to the repository, replace LOG=1 with LOG=0. When the exports are done, return to the original value (LOG=1 is useful to avoid database corruptions when the server process is killed).
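The LOG parameter sits in the dataSource URL of cdo-server.xml; as a sketch, such a line typically looks like the following (path and class are illustrative of a typical H2 setup):

```xml
<!-- cdo-server.xml: H2 dataSource; set LOG=0 only for the duration of large exports -->
<dataSource class="org.h2.jdbcx.JdbcDataSource"
            uRL="jdbc:h2:C:/TeamForCapella/server/db/h2/capella;LOG=1;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0"/>
```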
You have three ways to reinitialize data in a database.
The use of the Database – Restore job should be preferred but it is still possible to manually do the same operation.
This operation should be used to restore a database from the file generated by the Database – Backup job (this file has a pattern like: repoCapella.20151105.171109-sql.zip).
The database will be restored in exactly the same state as it was when the backup was performed:
!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:32.409
!MESSAGE Restore repoCapella processing starts.
!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:33.977
!MESSAGE Restore repoCapella restored database from : C:\TeamForCapella\server\..\scheduler\jenkins_home\jobs\Database - Backup\builds\7\archive\repoCapella.20200422.182742-sql.zip
!ENTRY com.thalesgroup.mde.melody.collab.server.repository.h2 1 0 2020-04-22 18:39:33.980
!MESSAGE Restore repoCapella processing ends. The file has been moved to C:\TeamForCapella\server\..\scheduler\jenkins_home\jobs\Database - Backup\builds\7\archive\repoCapella.20200422.182742-sql.zip.restored
!ENTRY org.eclipse.emf.cdo.server.db 2 0 2020-04-22 18:39:35.537
!MESSAGE Detected crash of repository repoCapella
!ENTRY org.eclipse.emf.cdo.server.db 1 0 2020-04-22 18:39:35.614
!MESSAGE Repaired crash of repository repoCapella: lastObjectID=OID248, nextLocalObjectID=OID9223372036854775807, lastBranchID=0, lastCommitTime=1 586 948 133 861, lastNonLocalCommitTime=1 586 948 133 86
The .zip backup file will be suffixed with .restored, or .error if the restore failed. This behavior can be disabled with -Dcollab.db.restore.rename.source.file=false.
NOTE: The restore process only supports textual script backups whose name ends with -sql.zip.
If you want to remove restored locking sessions from the database, use the Durable Locks Management view (see the Server Administration part of this documentation).
This way gives more control over the restoration: you may delete the repository, and the repository is restored project by project.
To restore projects in a repository:
Example:
server/server.exe -data C:/data/TeamForCapella/server/workspace
capella/importer.bat -data C:/data/TeamForCapella/server/importer-workspace
capella/command.bat -data C:/data/TeamForCapella/server/command-workspace
Example:
server/server.exe -configuration C:/data/TeamForCapella/server/configuration
tools/importer.bat -configuration C:/data/TeamForCapella/server/configuration
tools/command.bat -configuration C:/data/TeamForCapella/server/configuration
Example:
-vmargs -Dnet4j.config=C:/data/TeamForCapella/server/configuration/cdo-server.xml
Example:
Line 18 : <userManager type="auth" description="C:/data/TeamForCapella/server/usermanager-config.properties" />
Example:
Line 37 : <dataSource uRL="jdbc:h2:C:/data/TeamForCapella/server/db/h2/capella;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0" (…)
Update scheduler/conf/context.xml to change the Environment attribute JENKINS_HOME to the path of the jenkins_home folder:
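As a sketch (the folder value is an assumption mirroring the externalization example in this section), the Tomcat Environment entry in scheduler/conf/context.xml has this shape:

```xml
<!-- scheduler/conf/context.xml: JENKINS_HOME points at the externalized folder (value is illustrative) -->
<Context>
    <Environment name="JENKINS_HOME"
                 value="C:/data/TeamForCapella/scheduler/jenkins_home"
                 type="java.lang.String"
                 override="false" />
</Context>
```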
Example:
-vmargs -Dcollab.db.backupFolder=C:/data/TeamForCapella/server/db.backup
-Dcollab.db.restoreFolder=C:/data/TeamForCapella/server/db.restore
To directly externalize all the previous files, you can edit the server.ini file.
Example: To externalize all files in the folder C:\data\TeamForCapella\server
1) Update server.ini
-console
-data
C:/data/TeamForCapella/server/workspace
-configuration
C:/data/TeamForCapella/server/configuration
-vmargs
-Dnet4j.config=C:/data/TeamForCapella/server/configuration
-Dcollab.db.backup=false
-Dcollab.db.restore=false
-Dcollab.db.backupFolder=C:/data/TeamForCapella/server/db.backup
-Dcollab.db.restoreFolder=C:/data/TeamForCapella/server/db.restore
-Dcollab.db.backupFolderMaxSize=1G
-Dcollab.db.backupFrequencyInSeconds=900
-Dosgi.requiredJavaVersion=11
-Xms128m
-Xmx2000m
-XX:PermSize=128m
See Server configuration section → Cdo-server.xml File
See Jenkins installation section → Change the Port Used by Jenkins.
See Team For Capella Web server section → Change the Port of the admin server
(deprecated telnet) Change telnet port
This is deprecated because telnet is no longer used by default; it has been replaced by the admin server.
By convention, we could use 12036 for a server that listens on port 2036 (defined in cdo-server.xml), 12037 for a server that listens on 2037, 12038 for 2038, etc.
Ex: command.bat localhost 12036 capella_db backup
Ex: command.bat localhost 12036 close
Ex: importer.bat -consoleport 12036 -archivefolder
NOTE: If you have several jobs using the OSGI port value, you can create an environment variable to store it in a single place.
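For illustration, a hypothetical setup might define the port once and reference it in each job's command line (the variable name T4C_CONSOLE_PORT and the port value are assumptions for this example, not part of the product):

```
REM Defined once, e.g. as a Jenkins environment variable:
set T4C_CONSOLE_PORT=12036

REM Reused in each job:
command.bat localhost %T4C_CONSOLE_PORT% capella_db backup
importer.bat -consoleport %T4C_CONSOLE_PORT%
```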
When a very long text is written in a Description or Documentation field, an error of the following type can occur when saving a remote project or exporting a local project to the server:
[ERROR] org.h2.jdbc.JdbcSQLException: Value too long for column DESCRIPTION VARCHAR
To avoid this problem, change the file server/configuration/cdo-server.xml to use:
<dbAdapter name="h2-capella" /> instead of <dbAdapter name="h2" />
The description and documentation fields will then be stored as CLOB instead of VARCHAR.
h2-capella is the default value in cdo-server.xml.
The Team for Capella client comes with two views useful for performing administrative tasks: the Durable Locks Management view and the User Management view. To access these features, you must install the Team for Capella - Administration Views feature from the Team for Capella update site.
After restarting your T4C client, go to Preferences > General > Capabilities to enable the Administration Views capability.
Important: Durable locking is deactivated by default since Team For Capella 1.1.4 and 1.2.1.
The durable locking mechanism allows configuring the explicit locks manually taken by a user as persistent locks. If a user takes explicit locks and then terminates the connection to the remote model (by closing the shared project or exiting the Team for Capella client), the explicit locks are not released and the user will retrieve them on the next connection to the repository.
Durable locking can be activated on a client by adding the following option to the plugin_customization.ini file:
fr.obeo.dsl.viewpoint.collab/PREF_ENABLE_DURABLE_LOCKING=true
If the plugin_customization.ini file is not present, you need to create it in the capella/ folder and edit capella/capella.ini: before -vmargs, add: -pluginCustomization plugin_customization.ini
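For illustration, after this change the relevant part of capella.ini might look like the following (the surrounding arguments are an assumed context; only the -pluginCustomization pair is the actual addition):

```
-pluginCustomization
plugin_customization.ini
-vmargs
-Xmx2000m
```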
Note that activating or deactivating durable locking has no effect on existing connection projects: the client has to remove the local connection project and connect to the remote project again.
The following sections describe the case where the durable locking is activated.
Team for Capella provides the Durable Locks Management view to list existing locking sessions and delete them if needed.
|
When doing the first operation with this view, you will be asked to logon with the following dialog:
|
|
Locking Sessions can be removed only if the corresponding user is not connected. |
The Durable Locks Management view displays all locking sessions existing on the repository and the locks created by these locking sessions (if any).
A locking session is created whenever a team project is created on a client (Capella Connected Project). So if a user creates several team projects, he can have several locking sessions (as user1 in the screenshot above). Each locking session has a unique ID stored in the local .aird file.
Locks are owned by a locking session, so if the same user has two locking sessions (<=> 2 team projects) and locks an element in the first locking session, this element will appear with a red lock in the second locking session.
As explained above, using the Durable Locks Management view, locking sessions can be removed (this action is available to all users but should be performed by the administrator only). A locking session can be removed only if nobody is connected using it.
All locks held by the locking session are removed with it.
If a user tries to connect to the repository using an existing connection project referencing a removed Locking Session ID, an error dialog is displayed (see below) and a new locking session is created. The ID of this new locking session will replace the old one in the local .aird file on the next save action.
Team for Capella provides the User Management view to manage users on the Team for Capella Server.
|
The User Management view is useful only if the Team for Capella Server is configured with the "Identification" access control. |
The view is shown.
|
When doing the first operation with this view, you will be asked to logon with the following dialog:
|
The repository might have some inconsistent data and might need to be maintained.
The Repository maintenance application will look for the following inconsistencies:
This link might be broken if the representation has been deleted or if the internal index of the Representation Descriptor list is incorrect. This can cause trouble for the users connected to the project.
The application aims to delete orphan Representation Descriptors and stale references in the repository (both graphical and semantic models).
Once done, the application will close the server.
Note: This application requires that no user is connected to the repository.
There are two jobs available for maintenance in the Scheduler:
The application needs credentials to connect to the CDO server if the server has been started with authentication or user profiles. Credentials can be provided using the -repositoryCredentials parameter. The following arguments can be passed to the application (in maintenance.bat) or to the job (in the job config):
Arguments | Description |
---|---|
-repositoryCredentials | Login and password can be provided using a credentials property file. |
-hostname | defines the team server hostname (default: localhost). |
-port | defines the team server port (default: 2036). |
-repoName | defines the team server repository name (default: repoCapella). |
-connectionType | The connection kind can be set to tcp or ssl (keep it in lower case) (default: tcp) |
-consolePort | The port used to access the OSGi console (default: 2036). This value has to be equal to the -console parameter in server.ini. |
-diagnosticOnly | Allowed values are true or false. If true, only the diagnostic is done. The database will be unchanged. (default: false) |
-launchBackup | Allowed values are true or false. If true, the capella_db backup is done before any change is done on the database. (default: true) |
-archiveFolder | Indicates where the backup zip will be stored. |
-httpLogin | Backup and Maintenance are triggered by an HTTP request. This argument gives the login used to authenticate with the Jetty server. |
-httpPassword | Backup and Maintenance are triggered by an HTTP request. This argument gives the password used to authenticate with the Jetty server. |
-httpPort | Backup and Maintenance are triggered by an HTTP request. This argument gives the port used to communicate with the Jetty server. |
-httpsConnection | Backup and Maintenance are triggered by an HTTP request. This boolean argument specifies whether the connection should be HTTPS or HTTP. |
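As an illustration, the arguments above might be combined into a single maintenance invocation such as the following (host, ports, and folder are assumed values for the example, not defaults to copy verbatim):

```
maintenance.bat -hostname localhost -port 2036 -repoName repoCapella -consolePort 12036 -diagnosticOnly true -archiveFolder C:/data/TeamForCapella/server/db.backup
```

Running first with -diagnosticOnly true is a prudent way to review the reported inconsistencies before letting the application modify the database.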
An administration feature through WebServices is available for the Team for Capella Server: it brings user and repository management capabilities through a REST API and exposes an OpenAPI description:
Refer to the documentation available in the server/dynamic folder to discover how to install and enable it.
Several modes of access control can be used for each repository on the server:
|
When switching between different access control modes, the server must be restarted. Otherwise, the configuration update will not be taken into account. |
In Team for Capella, when using the User Profiles feature, user names and access rights are stored in the repository (i.e. in the database). Note that when passwords are stored in the user profiles model (i.e. when LDAP is not used), they are not encrypted. That is why the user name management part of this feature must be considered a simple identification feature.
|
If the server has been started with user profiles, the Importer needs write access to the whole repository (including the user profiles model). See the Resource permission pattern examples section. If this recommendation is not followed, the Importer might not be able to correctly prepare the model (proxies and dangling references cleaning, ...). This may lead to a failed import. |
To use the User Profiles feature in T4C, you first need to install the associated Team for Capella User Profiles UI feature from the Team for Capella update site.
After restarting your T4C client, go to Preferences > General > Capabilities to enable the User Profiles capability.
You can connect to the user profiles model of a repository thanks to the dedicated wizard:
|
The accounts created by default in the user profiles model are those defined in the administrators file. Refer to Server Configuration/User Profile Configuration |
To be able to change the user profiles model, the Administrator account should be used.
Here the default user profiles model with its table opened:
By default, the userprofile resource is hidden. To make it appear under the userprofile project, the EMF Resources filter must be deactivated via the Customize View... dialog.
When the server is configured with the User Profiles functionality, the following roles are automatically created:
These default roles are required:
Note that users created as administrators (in the administrator properties file presented in the previous part) have full access and do not need to be assigned to any role. Trying to assign roles to administrators will be prevented, and a dialog will explain that the administrators already have full access.
If a user has only read-only rights on a semantic element, they cannot create/clone/move a representation on it. Any attempt will display a pop-up indicating that it failed. More information in Locks and Updates on Diagrams
To add a user:
And complete login information
Use the dedicated tool to add a role:
A name can be given to the created role using the Properties view (attribute ID).
Once the new role is created, right click on it to add resource permission.
Complete the text box with the path of the authorized resource
|
|
Finally, associate users to a role in the Properties View of the role:
|
|
Inaccessible elements for a user have a gray padlock.
Since only resource permissions are currently available, a model has to be cut into several fragments in order to define fine-grained permissions on it.
Here is an example project:
Write access to the whole repository (including the user profiles model): .* or /.*
Write access to the whole TestModel project: /TestModel/.*
Write access to OA fragments of TestModel: /TestModel/fragments/OA.* or /TestModel/.*OA.*
Write access to OA and SA fragments of TestModel: /TestModel/fragments/(OA|SA).* or /TestModel/.*(OA|SA).*
Write access to the semantic part of TestModel: /TestModel/.*(capella|melodyfragment)
Write access to the representation part of TestModel (diagrams and tables): /TestModel/.*(aird|airdfragment|srm)
Write access to TestModel but not its fragments: /TestModel/.*(aird|capella|srm) or /TestModel/[^/]*
|
When dealing with aird and airdfragment files, do not forget to give the same rights to srm files (files used to store the representation data when lazy loading is enabled; lazy loading is enabled by default). Note that the project name in a resource permission pattern must be the name coming from the server repository, which is not necessarily the same as the name of the locally imported project (e.g. if TestModel.team is the name of the locally imported project, putting TestModel.team in the permission pattern will not work). |
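Since the patterns above behave like ordinary regular expressions, their effect can be sketched with a few checks. This is an illustration only: the paths are made up, and it assumes the server matches each permission pattern against the full repository path of a resource.

```python
import re

def has_access(pattern, path):
    # Assumption: access is granted when the permission pattern
    # matches the resource's full repository path.
    return re.fullmatch(pattern, path) is not None

# Hypothetical resource paths for a project named TestModel.
aird = "/TestModel/TestModel.aird"
fragment = "/TestModel/fragments/OA.capellafragment"

# "/TestModel/.*" covers everything under the project, fragments included.
assert has_access(r"/TestModel/.*", aird)
assert has_access(r"/TestModel/.*", fragment)

# "/TestModel/fragments/(OA|SA).*" only covers the OA and SA fragments.
assert has_access(r"/TestModel/fragments/(OA|SA).*", fragment)
assert not has_access(r"/TestModel/fragments/(OA|SA).*", aird)

# "/TestModel/[^/]*" excludes anything inside a sub-folder such as
# fragments/, because [^/] cannot match the extra path separator.
assert has_access(r"/TestModel/[^/]*", aird)
assert not has_access(r"/TestModel/[^/]*", fragment)
```

The contrast between the last two patterns is the practical one: use `.*` to include fragments, `[^/]*` to stop at the project root.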
At startup, there is only one superuser: Administrator.
A basic user can be promoted to super user. To do that:
You have the possibility to import a user profiles model; this is the same mechanism as for a Capella project.
In Team for Capella, you need to enable the Sirius Collaborative Mode – Default UI > User Profiles capability to access the import/export User Profiles functionalities.
Then, you need to create a general project which will contain the imported User Profile model.
Import User Profiles model:
Enter a local URI starting with platform:/resource/
Example:
platform:/resource/LocalUserProfilesProject/users.userprofile
To export, we can create a general project (or reuse the general project created earlier) and put a User Profile model into it, then right click on the User Profile model and choose Export:
|
How to reuse the user profiles model: it is recommended that you back up your user profiles model (refer to Server Administration/Team for Capella Scheduler/Import user profiles model).
|
User login/password can be modified via the Update User Information contextual menu. This contextual menu can be accessed by right-clicking on the column corresponding to the user being modified. Note that this action is done only by right-clicking on one of the cells of the column, clicking elsewhere (e.g. on the column title) should be avoided.
Once the User Update dialog appears, we can modify either user login or password.
Notes:
If the administrator password has been forgotten, it will no longer be possible to change the user profiles model or export a model to the server.
To give a new password to the Administrator account:
Please notice the following known issues:
Re-connection to a user profiles model raises error |
Team for Capella is a collaborative MBSE tool and methodology that relies on the Sirius framework. Both provide extension points and APIs allowing developers to customize and extend Team for Capella. Some of these developments are available as open source add-ons. This documentation references some pointers to get started:
To avoid performance issues, some guidelines must be followed.
It is recommended to generate viewpoints with CDO Native.
Please refer to the Capella Studio Documentation to see how to generate this part of the Viewpoint.
Viewpoints (as described in Capella Guide > User Manual > Overview > Capella Ecosystem) must be generated for CDO.
Nevertheless, if you decide to use the Legacy mode, you can enable it by setting the non-UI preference CDOSiriusPreferenceKeys.PREF_SUPPORT_LEGACY_MODE to true, even though it is neither a recommended nor a supported mode in Team for Capella. For more information, refer to Activate Legacy mode support.
Repeated calls to the following methods must be avoided:
For remote models, these methods do not simply access a reference, as the target objects are not shared; it is therefore recommended to store the result in a local variable instead of repeating those calls.
Repeated calls to org.eclipse.sirius.tools.api.interpreter.InterpreterRegistry.getInterpreter(object) must be avoided. Note that the IInterpreter is the same for the whole ResourceSet and corresponding Sirius Session. If you already have this Session, you can use org.eclipse.sirius.business.api.session.Session.getInterpreter().
OBEO S.A.S. is a French company, headquartered at 7 Boulevard Ampere, BP 20773, 44470 CARQUEFOU, FRANCE, and registered with the Business Number: 485 129 860 RCS Nantes.
THALES GLOBAL SERVICES S.A.S. is a French company, headquartered at 19-21 avenue Morane Saulnier, 78 140 Velizy Villacoublay, FRANCE, and registered with the Business Number 424 704 963 R.C.S. VERSAILLES.
The SOFTWARE is the TEAM FOR CAPELLA software.
The USER is the recipient of the SOFTWARE license (the licensee).
The company THALES GLOBAL SERVICES possesses intellectual property rights over the SOFTWARE, and OBEO hereby confirms that it holds a concession for distribution and technical support & maintenance rights for said SOFTWARE.
The user license for the SOFTWARE does not result in any transfer of the ownership of property rights, and entails solely the user rights stipulated herein.
The USER receives a non-exclusive and non-transferable right to use the SOFTWARE in a form that runs on one machine, provided payment of the agreed price is received in accordance with the terms of the agreement.
The USER undertakes not to directly or indirectly infringe the rights held by THALES GLOBAL SERVICES and OBEO. The USER undertakes to take all measures necessary relative to its authorised users to ensure the confidentiality and respect of property rights over said SOFTWARE. The USER undertakes in particular to ensure that its personnel do not keep any documentation or any copies or reproductions of the SOFTWARE.
The SOFTWARE will be used solely for the USER's internal requirements and the requirements of users authorised by the USER, up to the maximum number of authorised users, and for a perpetual or limited duration of use as described and approved by both parties in the Technical and Financial Proposal issued by OBEO or in the USER purchase order. Third parties outside the USER's company are excluded from the license.
The USER must ensure that only authorised users have access to the SOFTWARE. Any additional license requested by the USER will incur an additional charge based on the current schedule of charges.
The USER will refrain from assigning, leasing, supplying, distributing or lending the SOFTWARE, and from granting sub-licenses or any other rights, without prior written agreement from OBEO.
More generally, the USER undertakes not to disclose all or part of the SOFTWARE to any third party by electronic methods, over the internet, or by any other means.
The USER undertakes not to make any amendment, modification, correction, arrangement, adaptation, transcription, combination or translation of all or part of the SOFTWARE without express, prior, written permission from OBEO, for which OBEO itself will first obtain express permission from THALES GLOBAL SERVICES.
The USER is permitted to make and keep a single copy of the SOFTWARE for backup and archiving purposes and for use in recovery in the event of an incident.
The USER is not permitted to reverse engineer, decompile or translate the SOFTWARE.
The USER acquires no rights over the SOFTWARE source code, and OBEO alone reserves the right to make modifications, under supervision from THALES GLOBAL SERVICES, in order to correct any faults or development enhancements to the SOFTWARE.
Only the owner of the intellectual property rights is in fact permitted to modify the SOFTWARE, change versions, amend the functionality, specifications, options and all other features, without providing notice to the USER and without the USER being able to derive any advantage whatsoever therefrom.
In the event the USER wishes to obtain indispensable information for the implementation of interoperability between the SOFTWARE and some other software developed independently by the USER, for a use that is consistent with the SOFTWARE's intended purpose, the USER undertakes to consult OBEO before starting any work to this end, and OBEO can provide the USER with the information needed to provide this interoperability, which OBEO itself obtains from THALES GLOBAL SERVICES. The parties will negotiate a reasonable fee in exchange for this service.
If THALES GLOBAL SERVICES is unable to provide the information required to provide interoperability of the SOFTWARE, OBEO will be entitled to authorise the USER to decompile or reproduce the SOFTWARE, strictly within the stipulations of Article L.122-6-1 IV of the French Intellectual Property Code.
Pursuant to Article L.122-6-1 III of the French Intellectual Property Code, the USER is permitted to observe, study or test the functioning or security of the SOFTWARE, in order to determine the ideas and principles which underlie any element of the SOFTWARE if this is done while loading, displaying, running, transmitting or storing the SOFTWARE as the USER is permitted to do by virtue hereof.
THALES GLOBAL SERVICES must be informed of any activity of this kind performed pursuant hereto.
The USER will refrain from reproducing the documentation about this SOFTWARE without prior written permission from OBEO.
Any unauthorised use, or use not compliant with these conditions of use of the SOFTWARE, will result in termination of the present user license as of right one month after the sending of formal notice that is not acted upon, and without prejudice to any legal proceedings seeking remedy for any subsequent loss or harm suffered by OBEO and the holder of the intellectual property rights.
The USER acknowledges that the software may contain Open Source Software which may be subject to separate license terms. The relevant license terms are provided by OBEO to the USER either as part of the SOFTWARE or as part of the documentation.
OBEO may grant the USER an evaluation license solely for evaluation, testing and demonstration purposes, enabling the USER to evaluate, test and use the SOFTWARE for a set period with a maximum of 2 months, in order to confirm its suitability.
The USER is then allowed to download or install an evaluation version of the SOFTWARE.
The USER will consequently refrain from using the SOFTWARE for any purpose inconsistent with those for which the evaluation license is granted. For instance, the USER will not use or deploy the SOFTWARE in any production environment.
The USER in particular may not decompile, copy or reproduce in any way whatsoever the SOFTWARE made available to the USER.
At the end of the contractually-stipulated evaluation period, the USER undertakes either to acquire a full user license for the SOFTWARE from OBEO, or to destroy the SOFTWARE and stop using it.
OBEO does not provide any support or maintenance service relative to evaluation licenses.
The USER is responsible for the proper operation of the hardware used to run the SOFTWARE and for the compliance of its environment with OBEO's specifications.
In the event of a permanent or temporary change in the system designated by the USER, the USER must have ensured beforehand that the future designated system is compatible with the SOFTWARE, and notify OBEO of the change. OBEO may refuse to ratify the change of system. If the USER fails to comply with such a refusal, OBEO is entitled to terminate this agreement.
In all cases where the designated system is changed, the USER undertakes to immediately destroy all files comprising the copy of the SOFTWARE installed on the previous designated system.
It is recommended that the USER take out a support & maintenance contract, its terms and renewal conditions are set forth in the Technical and Financial Proposal issued by OBEO.
OBEO warrants that the SOFTWARE conforms to its documentation; however, the USER acknowledges and agrees that the SOFTWARE is not guaranteed to run either error-free or without interruption and that the USER has exclusive control of and responsibility for the usage of any inputted or generated outputted data (including its accuracy and adequacy). While the warranty or support & maintenance contract is active, OBEO is committed to remedying at its expense any blocker issue detected by the USER, under the condition that it can be reproduced on a non-modified software executed within the technical requirements set forth in the documentation. The USER acknowledges and commits to execute the process set forth in the Technical and Financial Proposal to create such requests.
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. The USER is solely responsible for determining the appropriateness of using the SOFTWARE and assumes all risks associated with its exercise of rights under this agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. OBEO does not guarantee against the risks inherent in using the SOFTWARE including but not limited to service interruption, loss of connection, data loss, system crashes, poor performance or deterioration in performance. EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER OBEO AND/OR ITS THIRD PARTY SUPPLIERS SHALL HAVE ANY LIABILITY FOR ANY INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
The USER is responsible for taking backups before any work is carried out on its hardware or software by OBEO.
EXCEPT FOR BREACH OF CONFIDENTIALITY, INSURED CLAIMS, AND THE PARTIES' RESPECTIVE EXPRESS INDEMNITY OBLIGATIONS, THE TOTAL LIABILITY OF EITHER PARTY TO THE OTHER PARTY FOR ALL DAMAGES, LOSSES, AND CAUSES OF ACTION (WHETHER IN CONTRACT, TORT (INCLUDING NEGLIGENCE), OR OTHERWISE) SHALL NOT EXCEED 10% OF THE AGGREGATE FEES PAID HEREUNDER. THE LIMITATIONS PROVIDED IN THIS SECTION SHALL APPLY EVEN IF ANY OTHER REMEDIES FAIL OF THEIR ESSENTIAL PURPOSE.
OBEO will defend actions brought against the USER at its own expenses provided that it is based upon a claim that the SOFTWARE infringes a United States copyright or patent, or violates any third party proprietary right or trade secret. OBEO will pay all costs and damages finally awarded against the USER, provided that OBEO is given prompt written notice by the USER of such claim and is given all available information, reasonable assistance, and sole authority to defend and settle the claim.
OBEO will not have any obligation under the "VI Indemnity" section and will have no liability whatsoever if the claim is (1) based upon the use of the SOFTWARE in combination with other software not provided by OBEO if such claim would not exist except for such combined use, (2) based upon a version of the SOFTWARE modified by the User or any other third party if the claim relates to the modified parts, (3) based upon the use of the SOFTWARE by the USER in a manner not authorized or not set forth in this agreement.
OBEO, at its own choice and expenses, will get the right to continue using the SOFTWARE for the USER, or will modify or replace the SOFTWARE so it becomes non-infringing; or, if such remedies are not reasonably available, OBEO will accept the return of the SOFTWARE and this agreement will terminate.
OBEO will have no liability on any expense made by the USER related to any action except prior written consent from OBEO. OBEO will have no liability for infringement of the intellectual property rights of a third party except as expressly provided in this "VI Indemnity" section.
The USER agrees that national or international foreign trade law and regulations may prevent OBEO from fulfilling its obligations under this agreement, including embargoes or any other sanctions.
The USER and OBEO will strictly comply with applicable export and import laws and regulations, including those of the United States, and will reasonably cooperate with the other by providing all information to the other, as needed for compliance.
Except when otherwise required by law or regulation, the USER shall not export, re-export or transfer, whether directly or indirectly, the SOFTWARE and material delivered pursuant to this agreement without first (1) at the USER sole expense, complying with the applicable export laws and the import laws of the country in which the SOFTWARE is to be used and (2) the express written consent of OBEO and (3) a validated export license is obtained applicable authority where required.
This SOFTWARE contains publicly available encryption source code classified ECCN 5D002 and uses encryption technologies, notably SSL/TLS, to protect customer data in transit. The country in which you are currently located may have restrictions on the import, possession, use, and/or re-export to another country of encryption software. BEFORE using any encryption software, please check the country's laws, regulations, and policies concerning the import, possession, use, and re-export of encryption software, to see if this is permitted.
The provisions of this "VII Export" section will survive the expiration or termination of this agreement for any reason.
This SOFTWARE is a commercial product that has been developed exclusively at private expense. If this SOFTWARE is acquired directly or indirectly on behalf of a unit or agency of the United States Government under the terms of (1) a United States Department of Defence contract, then pursuant to DOD FAR Supplement 227.7202-3(a), the United States Government shall only have the rights set forth in this license agreement; or (2) a civilian agency contract, then use, reproduction, or disclosure is subject to the restrictions set forth in FAR clause 27.405-3, entitled Commercial computer software, and any restrictions in the agency's FAR supplement and any successor regulations thereto, and the restrictions set forth in this license agreement.
This agreement shall come into force on the date of the order of the SOFTWARE license by the USER and will be in effect until the expiration of the license, unless terminated as set forth in this agreement. Upon termination of the agreement or expiration of the license, the USER shall immediately destroy or return all copies of the terminated or expired SOFTWARE.
During the term of this agreement and for one year after its termination, the USER shall maintain accurate information on the use of the SOFTWARE. Unless strictly prohibited by Government policy, OBEO shall have the right, once per year, at its own expense and under reasonable conditions of time and place in the USER's premises, to audit and copy these records and to verify the USER's compliance with the terms of this agreement.
The USER acknowledges having read this agreement, understands it, and agrees to be bound by its terms and conditions. The USER further agrees that this agreement is the complete and exclusive statement of the agreement between the parties regarding the SOFTWARE, which supersedes all proposals or prior agreements, oral or written, and all other communications between the parties relating to the subject matter of this agreement.
If any term or provision of this agreement is determined to be invalid or unenforceable for any reason, it shall be adjusted rather than voided, if possible, to achieve the intent of the parties to the extent possible. In any event, all other terms and provisions shall be deemed valid and enforceable to the maximum extent possible.
Neither party shall be liable for any loss, damage, or penalty arising from delay due to causes beyond its reasonable control.
Notice to be given or submitted by the USER to OBEO shall be in writing and directed to OBEO headquarters.
This agreement may be modified only by a written instrument duly executed by an authorized representative of OBEO and the USER. OBEO and the USER agree that any terms and conditions of any purchase order or other instrument issued by the USER in connection with this agreement that are in addition to or inconsistent with the terms and conditions of this agreement shall be of no force or effect.
This agreement may not be assigned or transferred by the USER, in whole or in part, either voluntarily or by operation of law, without the prior written consent of OBEO.
The failure of a party to enforce any provision of this agreement shall not constitute a waiver of such provision or the right of such party to enforce such provision or any other provision.
This agreement will be governed by and construed in accordance with the substantive laws of FRANCE, without giving effect to any choice-of-law rules that may require the application of the laws of another jurisdiction.