Developing new Process Definitions
If your refresh process needs functionality that is not covered by the standard process definitions provided with the module, there are several ways to resolve this:
- contact Redwood Software. If your request covers functionality that is not unique to your situation, chances are high that Redwood already has a solution or can help you develop one
- use an SAP ABAP report with a (temporary) variant
- call an RFC-enabled function module from within a process definition
- record a BDC session and convert it to a process definition
- use data export/import
- change data through direct database access
- use an OS process, for example a bash script
SAP ABAP report with a (temporary) variant
This is the preferred way of automating refresh steps not covered by the module.
Please see Using Temporary Variants for the exact procedure.
RFC-enabled function module
The Redwood System Copy module allows you to call RFC-enabled function modules from process definitions of type RedwoodScript.
Before such a function module can be called from the System Copy module, its interface description needs to be exported via the process definition SAP_ExportRFCFunctionModuleInterface. The output of this process needs to be sent to Redwood, where RedwoodScript code is generated. This code can be imported into System Copy and called from RedwoodScript.
BDC session recording
Some transactions can be recorded as BDC sessions. These recordings can be exported to a file, which needs to be sent to Redwood, where RedwoodScript code is generated. This code can be imported into System Copy and called from RedwoodScript.
Data export/import
Often post-processing involves steps whose sole purpose is to restore data that is overwritten when the source database is laid over the target database. In this case, the simplest and most efficient way of implementing these steps is to export the data from the corresponding SAP tables and re-import it during post-processing.
The SAP tables to export / import are defined in System Copy tables (Redwood System Copy Tool UI Scripting -> Tables).
The columns of this table follow the CTS standard:
Key
: arbitrary unique value
PGMID
: SAP PGMID; use R3TR
OBJECT_TYPE
: SAP object type; use TABU
OBJECT_NAME
: SAP object name; the name of the SAP table
TABLE_KEYS
: use *
COMMENT
: comment describing the purpose of the table
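For illustration, an entry for a hypothetical custom table ZMY_SETTINGS could look as follows (the key and comment are free-form; the table name is an assumption for this example):

Key | PGMID | OBJECT_TYPE | OBJECT_NAME | TABLE_KEYS | COMMENT
---|---|---|---|---|---
ZMY_SETTINGS_1 | R3TR | TABU | ZMY_SETTINGS | * | Preserve custom settings during refresh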
The existing SCP_CTS_* tables represent the default list of tables per component; these tables cannot be edited. To override a row from the predefined tables, a record must be added to the table SAPSystemCopy_TransportTables. (If this table does not exist, it can be created from the definition with the same name.)
Table SAPSystemCopy_TransportTables has the following columns:
Key
- matching key of an existing or new component, defined without EXPORT_, followed by a sequence number, e.g. USER_1
TableName
- table name you want to add or exclude. The text <DEFAULT> can also be used here to exclude all values from the default SCP_CTS table if a predefined component is edited. You can also specify a valid SCP_CTS table by adding <TABLE NAME>, e.g. <SCP_CTS_PROFILE>.
Option
- I to include or E to exclude (values from the matching default table are automatically included)
Scope
- GLOBAL (valid for all configurations), or the actual flavour name, landscape key, or configuration name
In order to add your own data to this process, perform the following steps:
- Add a new parameter EXPORT_<key> to your configuration on the export screen, where <key> is a name to identify the data you want to preserve. For instance, if you want to preserve your own custom Z-tables, the <key> might be ZTABLES, so the name of the parameter would then be EXPORT_ZTABLES. The value can be one of the following: ALL - export data in all clients; <default> - export data in the default client configured in the System Copy module for the target SAP system; a comma-separated list of client numbers (e.g. 100,200) - preserve data only in the given clients.
- Open table SAPSystemCopy_TransportTables.
- Add a new row.
- Set field Key to <key> followed by a sequence number, e.g. ZTABLES_1.
- Set TableName to MYZTABLE.
- Set Option to I.
- Set Scope to GLOBAL.
Repeat these steps until you have listed all your custom tables (see the example below).
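For example, to preserve two custom tables MYZTABLE and MYZTABLE2 (hypothetical table names) in all clients, you would add the parameter EXPORT_ZTABLES with value ALL on the export screen and create the following rows in SAPSystemCopy_TransportTables:

Key | TableName | Option | Scope
---|---|---|---
ZTABLES_1 | MYZTABLE | I | GLOBAL
ZTABLES_2 | MYZTABLE2 | I | GLOBAL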
When you run Verify from the UI, the export parameters are checked for completeness. The verify output also displays a warning if any duplicate tables were defined across the components.
OS Processes
OS processes are normally split into two process definitions:
- an OS-specific process definition which does the actual work
- a wrapper that properly fills all parameters and starts the correct OS process
First start with the wrapper process definition:
- Create a process definition Copy_Files_XYZ of type RedwoodScript
- Create a parameter CONFIG_NAME of type String with direction In. At runtime this parameter will hold the name of the refresh configuration.
- Create any other parameter that you want to pass to the OS script
- Create the following source code:
import com.redwood.scheduler.custom.app.scp.*;
import com.redwood.scheduler.custom.app.scp.util.*;
{
/* set the SCP partition */
SchedulerSessionHelper.setDefaultPartition(jcsSession, jcsJob);
/* create a new OS process as child process of this process */
Job j = JobHelper.getOsJob(jcsSession, CONFIG_NAME, jcsJob, true);
/* set the value of the DIR parameter on the child process */
JobHelper.setParameter(j, "DIR",
Configuration.getConfiguration(jcsSession,
CONFIG_NAME,
"REFRESH_DATA_DIRECTORY"));
/* submit the child process
automatic parameters will be populated here too
this process (the parent) will be marked to wait for the child process */
JobHelper.submitAndWait(jcsSession, jcsJobContext);
/* do more stuff here; child process runs independently */
}
The signature of the method JobHelper.getOsJob looks as follows:
/**
 * Get an OS dependent process for the given process.
 * The OS dependent process is determined based on the
* configuration and role of the given process.
*
* @param session
* Scheduler session to use
* @param configName
* Name of the configuration
* @param jcsJob
* OS independent process
* @param isTarget
* Role of the process: true if the process needs to run on the
* target system and false if it needs to run on the
* source system
 * @return OS dependent process as a child process
*/
public static Job getOsJob(SchedulerSession session,
String configName,
Job jcsJob,
boolean isTarget);
Next create the OS specific process definition (assuming Unix):
- Create a process definition Copy_Files_XYZ_UNIX of type BASH (or KSH)
- Create at least the following parameters
  - JCS_USER of type String with direction In
- Create any other parameters that you want to pass to the OS script. Please make sure that their names match exactly the corresponding names of the wrapper process definition
- Create the following source code:
# print the effective OS user and host for the process log
id
hostname
# change to the home directory and show it
cd
pwd
# load the SAP environment of the OS user
. ~/.profile
echo
# verify that the SAP environment is set
if [ "x${SAPSYSTEMNAME}" = "x" ]
then
  echo "**********"
  echo "ERROR: SAPSYSTEMNAME environment variable is not set. Check your environment"
  exit 2
fi
All parameters of the process definition are available in the script. Please refer to the process definition topic for more details regarding creating process definitions.
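As an illustration of this, a minimal sketch of how parameters can be referenced as shell variables in the script (assuming the DIR parameter set by the wrapper example above):

# JCS_USER is populated automatically; DIR is set by the wrapper (see above)
echo "Running as user: ${JCS_USER}"
echo "Refresh data directory: ${DIR}"
ls -l "${DIR}"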
The process definition Copy_Files_XYZ can now be used in the refresh chain. It will automatically call the Copy_Files_XYZ_UNIX process definition with the appropriate parameters.
All input parameters of the parent process (for example Copy_Files_XYZ) are automatically passed over to the child process (for example Copy_Files_XYZ_UNIX).
All output parameters of the child process are automatically propagated to the parent process if they are available there.
Automatically populated input parameters of the child processes are listed in the Automatic process parameters section below.
DB processes
Direct database access is an efficient solution when none of the other ways of accessing or manipulating data (ABAP report, RFC function module, etc.) is possible.
Direct database access is normally achieved by creating two process definitions:
- a database specific process definition which does the actual database manipulation
- a wrapper that properly fills all parameters and starts the correct database process
First start with the wrapper process definition:
- Create a process definition Change_SAP_Objects_XYZ of type RedwoodScript
- Create a parameter CONFIG_NAME of type String with direction In. At runtime this parameter will hold the name of the refresh configuration.
- Create any other parameter that you want to pass to the database
- Create the source code as follows:
import com.redwood.scheduler.custom.app.scp.*;
import com.redwood.scheduler.custom.app.scp.util.*;
{
/* set the SCP partition */
SchedulerSessionHelper.setDefaultPartition(jcsSession, jcsJob);
/* create a new DB process as child process of this process */
Job j = JobHelper.getOsDbJob(jcsSession, CONFIG_NAME, jcsJob, true);
/* set the value of the DIR parameter on the child process */
JobHelper.setParameter(j, "DIR",
Configuration.getConfiguration(jcsSession,
CONFIG_NAME,
"REFRESH_DATA_DIRECTORY"));
/* submit the child process
automatic parameters will be populated here too
this process (the parent) will be marked to wait for the child process */
JobHelper.submitAndWait(jcsSession, jcsJobContext);
/* do more stuff here; child process runs independently */
}
The signature of the method JobHelper.getOsDbJob looks as follows:
/**
 * Get a DB dependent process for a given process. The DB dependent process
 * is determined based on the configuration and role of
 * the given process.
*
* @param session
* Scheduler session to use
* @param configName
* Name of the configuration
* @param jcsJob
* Parent process
* @param isTarget
* Role of the process: true if the process needs to run on the
* target system and false if it needs to run on the
* source system
 * @return DB dependent process as a child process
*/
public static Job getOsDbJob(SchedulerSession session,
String configName,
Job jcsJob,
boolean isTarget);
Next create the DB specific process definition (assuming Oracle on Unix):
- Create a process definition Change_SAP_Objects_XYZ_UNIX_ORA of type BASH (or KSH)
- Create at least the following two parameters
  - JCS_USER of type String with direction In
  - DB_OWNER of type String with direction In
- Create any other parameters that you want to pass to the database. Please make sure that their names match exactly the corresponding names of the wrapper process definition
- Create the source code as follows:
# print the effective OS user for the process log
id
# change to the home directory and load the database environment
cd
. ~/.profile
pwd
echo "***************"
# activate the SAPconnect nodes directly in the database
sqlplus "/as sysdba" <<EOF
whenever sqlerror exit sql.sqlcode
update ${DB_OWNER}.SXNODES set ACTIVE = 'X';
commit;
EOF
ERRORCODE=$?
if [ ${ERRORCODE} != 0 ]
then
  echo "***************"
  echo "ERROR: Failed to activate SAPconnect nodes. ErrorCode: ${ERRORCODE}"
  exit ${ERRORCODE}
fi
All parameters of the process definition are available in the script. Please refer to the process definition topic for more details regarding creating process definitions.
The process definition Change_SAP_Objects_XYZ can now be used in the refresh chain. It will automatically call the Change_SAP_Objects_XYZ_UNIX_ORA process definition with the appropriate parameters.
All parameters of the parent process (for example Change_SAP_Objects_XYZ) are automatically passed over to the child process (for example Change_SAP_Objects_XYZ_UNIX_ORA).
All output parameters of the child process are automatically propagated to the parent process if they are available there.
The same input parameters of the child process are automatically populated as for OS processes (see Automatic process parameters below).
Automatic process parameters
The following table lists all process parameters which are automatically populated for DB and OS processes.
Parameter | Description
---|---
JCS_USER | The user ID of the OS user which will run the child process on the OS level. Typically this is the SAP <SID>adm user for processes that run on the CI, and the DB administrator user (for example ora<SID>) for processes that run on the DB host
SAP_SID, SOURCE_SAP_SID, TARGET_SAP_SID | SID of the SAP system
DB_SID, SOURCE_DB_SID, TARGET_DB_SID | SID of the database
DB_OWNER, SOURCE_DB_OWNER, TARGET_DB_OWNER | SAP DB schema name
SAPADM_USER, SOURCE_SAPADM_USER, TARGET_SAPADM_USER | ID of the SAP administrator user, i.e. the <SID>adm user
DBADM_USER, SOURCE_DBADM_USER, TARGET_DBADM_USER | ID of the DB administrator user, for example the ora<SID> user for Oracle
DIR_TRANS, SOURCE_DIR_TRANS, TARGET_DIR_TRANS | SAP transport directory, e.g. /usr/sap/trans
DIR_PROFILE, SOURCE_DIR_PROFILE, TARGET_DIR_PROFILE | SAP profile directory, e.g. /usr/sap/<SID>/SYS/profile
SAP_INSTANCE_NAME, SOURCE_SAP_INSTANCE_NAME, TARGET_SAP_INSTANCE_NAME | SAP instance name, e.g. DVEBMGS<No>
SAP_INSTANCENR, SOURCE_SAP_INSTANCENR, TARGET_SAP_INSTANCENR | SAP instance number, e.g. 00 or 25
SAPGLOBALHOST, SOURCE_SAPGLOBALHOST, TARGET_SAPGLOBALHOST | Host name of the SAP central instance
REFRESH_DATA_DIRECTORY | Refresh data directory
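As an illustration, a minimal sketch of how these automatically populated parameters might be referenced in an OS or DB process script (the echoed messages are examples only):

# SOURCE_SAP_SID, TARGET_SAP_SID and DIR_TRANS are populated automatically (see the table above)
echo "Refreshing target system ${TARGET_SAP_SID} from source system ${SOURCE_SAP_SID}"
echo "Transport directory: ${DIR_TRANS}"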
See Also
sscpTopic