ADempiere Testing with Sahi

From ADempiere
Revision as of 18:58, 9 April 2014 by MJMcKay (Talk) (Small correction.)


What is Sahi

Sahi (designed by Tyto Software Pvt. of Bangalore, India) is an automated testing tool for web applications. Tyto sells a pro version and also maintains an open source version which has fewer features but follows the pro version. The ADempiere test suite uses the open source version. For more information, visit the Sahi web site.

How to install Sahi

The install is simple: from the Sahi web site, find the link to the Sahi Open Source download on SourceForge and download the latest version of the installation jar file.

For installation instructions, see the older documentation.

Configuration

There are a few items to configure to get the ADempiere Test Suite set up properly:

  1. Configure Sahi to run on your system and with your browsers. You may also find it helpful to configure the timings of the tests in the sahi.properties file.
  2. Add the ADempiere Test Suite to your development environment. It can be found on SourceForge at https://sourceforge.net/p/adempiere/adempiere-test-suite. Clone the read-only repository using Mercurial with the following command. To get read-write access, you need to be a developer and be logged in to SourceForge. For more information on becoming a developer, see Becoming a Developer.
 hg clone http://hg.code.sf.net/p/adempiere/adempiere-test-suite adempiere-adempiere-test-suite

  3. Add the cloned project to your development environment as a project.
  4. Add pointers to your development environment location in the Sahi user properties.
Note:

Ensure the version of ADempiere under test uses the web ID class that creates references in ZK of the form "Field_C_BPartner_ID_...". This feature is configurable after 3.7.1, and the proper class is included in 3.8.0. For other versions, the file may already be there or you will have to add a new class. In the ADempiere repository, the needed file can be copied from commit #64800ef650d5 (AdempiereIdGenerator.java).

To check if everything is working properly, update the utils/test.properties file with suitable references to a running instance and run the run_test.xml file as an Ant build. Logs of the test results will be saved in the utils/target/log directory.

You can configure which test suites run by copying the utils/myTestTemplate.properties file to utils/myTest.properties and changing the SUITE_FILE property to point to the correct suite, scenario or test file (any .sah file will work).
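For example, a minimal myTest.properties might contain nothing more than the suite pointer. The SUITE_FILE property name comes from the template described above; the suite file name and path here are illustrative only:

```properties
# utils/myTest.properties - copied from utils/myTestTemplate.properties
# SUITE_FILE may point to a suite, scenario or single test (.sah) file.
SUITE_FILE=../suite/suite_370_bugfix_tests.sah
```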

For more control over the test runs, use the Sahi internal controller. See the Sahi documentation on how to use it.

The Sahi Test Framework

The ADempiere Test Suite includes a framework for Sahi tests with full support for automated testing as part of the nightly build and for Eclipse-based test development. The test suite is under the same configuration control as the main ADempiere project (see Software Development Procedure).

The ADempiere test suite provides wrappers and common functions so that tests can be developed using concepts familiar to ADempiere developers. Tests are organized into test flows, which are one or more test scripts focused on a single test case, feature, bug fix or other piece of functionality. Test flows are grouped into scenarios, which cover multiple test cases. A number of scenarios can be run as part of a test suite.

ADempiere Wrappers for Sahi

Sahi functions refer to very low-level aspects of the HTML in the web interface. For usability, maintainability and ease of creating test functions, a set of wrappers has been developed that refers to functional aspects of the ADempiere interface and abstracts away the complexity of the Sahi functions. In most cases, testers should be able to use these wrappers to develop the majority of their test code.

The main wrappers are defined in the following files:

  • lib/wrapper/dialogs.sah
  • lib/wrapper/fields.sah
  • lib/wrapper/icons.sah
  • lib/wrapper/info.sah
  • lib/wrapper/lookup.sah
  • lib/wrapper/windows.sah

Additional wrapper functions related to dialog boxes are included in files such as

  • lib/model/VPayment.sah
  • lib/model/VLocationDialog.sah

The wrapper functions follow a general naming convention of x<Action><Name>(), where x can be "i" for icons, "f" for fields or "w" for windows. Action is the action in ADempiere, such as Open, Close, Get or Set. Name is the name of the function or object. For icons, the name is the function performed by the icon, for example iCopyRecord() or iSaveRecord(). For fields, the name is the type of the field, such as Text, TextArea, List, Search or Amount. For window functions, the Name is either Window or Tab, for example wOpenWindow("Sales Order") or wCloseWindow("Price List").

Buttons such as the Doc Action button can be clicked and the associated dialog processed with a command like fDocAction("Complete").

With these functions, a simple test of the process to create a sales order, process it and pay it would look like this:

wOpenWindow("Sales Order");
iFormView();
iNewRecord();
fSetSearch("C_BPartner_ID", "Joe Block");
iSaveRecord();
wOpenTab("Order Line");
iFormView();
iNewRecord();
fSetSearch("M_Product_ID", "Azalea Bush");
if(_condition(_exists(_span(/^Insufficient Inventory/)))){
	_log("*** Warning: " + _getText(_span(/^Insufficient Inventory/)), "info");
	iOk();
}
iSaveRecord();
wOpenTab("Order");
fDocAction("Complete");

var $GrandTotal;
_set($GrandTotal,fGetAmount("GrandTotal"));


// Create the payment data.
$PaymentData = [
                ["Credit Card", "Visa"],
                ["Number", "1234"],
                ["Expires (MMYY)", "1211"],
                ["Amount", $GrandTotal],
                ["Voice authorization code", ""],
                ["", ""]
               ];

fPaymentRule("Credit Card",$PaymentData);

Where's the testing? Each low-level function includes tests of existence and success that will be reported in the log. For example, take the line

fSetSearch("C_BPartner_ID", "Joe Block");

The fSetSearch() function draws on the following code:

/********************************************************************
 *
 * fSearch($FieldName)
 * 
 * Returns the element for the specified field
 *
 *******************************************************************/
<browser>
function fSearch($FieldName){
	/* Double underscores required to prevent scheduling */
	__assertExists(__textbox(/^zk/, __in(__div("/^Field_" + $FieldName + "/"))), "Error: fSearch() can't find field " + $FieldName);
	return __textbox(/^zk/, __in(__div("/^Field_" + $FieldName + "/")));
}
</browser>

/********************************************************************
 *
 * fGetSearch($FieldName)
 * 
 * Get the value/contents of a search field.
 *
 *******************************************************************/
<browser>
function fGetSearch($FieldName) {
	return __getText(fSearch($FieldName));
}
</browser>

/********************************************************************
*
* fRequerySearch($FieldName)
* 
* Requery a record
*
*******************************************************************/
function fRequerySearch($FieldName){
	_rightClick(fSearch($FieldName));
	_click(_link("Re&Query")); //Requery
}

/********************************************************************
 *
 * fSetSearchRq($FieldName, $Value, $Requery)
 * 
 * Set a search field to a value.  Optionally, requery the field
 * if $Requery = true
 *
 *******************************************************************/
function fSetSearchRq($FieldName, $Value, $Requery){
	
	if($Requery == "Y") {
		fRequerySearch($FieldName);
	}
	_setValue(fSearch($FieldName), $Value);
	_removeFocus(fSearch($FieldName));
	_assertEqual($Value, fGetSearch($FieldName), "Error: fSetSearch() failed to set field " + $FieldName + " to value = " + $Value);
}

/********************************************************************
 *
 * fSetSearch($FieldName, $Value)
 * 
 * Set a search field to a value.  No requery.
 *
 *******************************************************************/
function fSetSearch($FieldName, $Value){
	
	fSetSearchRq($FieldName, $Value, "N");
}

The identity function fSearch() abstracts the low-level HTML required to identify the field and returns the element that can be used to find the field on the open tab. This is the only place such code exists, making the test software easy to maintain if the ZK ID generation methods change. The identity function also tests for the existence of the field.

The fSetSearch() function simply calls another function, fSetSearchRq(), which includes the option to requery the field before the value is set. This is useful in cases where new data has been added to the database since the last login. The function fSetSearchRq() requeries the field from the right-click pop-up menu, sets the field value and then tests the value to ensure that there is a match. Note that this implementation will flag partial text entries as errors even where the application can find a match in the underlying field - "Block" instead of "Joe Block".
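For example, assuming "Joe Block" is the only matching business partner, an abbreviated entry would still fail the final assertion in fSetSearchRq() (illustrative only):

fSetSearch("C_BPartner_ID", "Block"); // reported as an error: the asserted value "Block"
                                      // will not match the field contents once the
                                      // application resolves it to "Joe Block"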

Also note the following code:

fSetSearch("M_Product_ID", "Azalea Bush");
if(_condition(_exists(_span(/^Insufficient Inventory/)))){
	_log("*** Warning: " + _getText(_span(/^Insufficient Inventory/)), "info");
	iOk();
}

After running this test a number of times, you will exhaust the supply of Azalea Bushes, causing a warning to pop up. The if() statement captures this warning condition and makes a note in the log.

Test Flows

A Test Flow is a file where the test code resides. Test flows are not limited in what they can contain, but they should follow these rules:

  • The file name should start with "tf_" and the rest of the name should be as descriptive as possible. If the test is written for features that appear in a particular version or that relate to a particular feature request or bug, the file name should include that information, for example tf_380_ADEMPIERE-72_wip_assignment_info.sah.
  • Test Flows should not include other files as the flows themselves will likely be included in Scenario files that will perform the includes.
  • Function names used in the test flow file should be unique across the system as the scope of the functions will be global. Function names that use the elements of the file name are a good idea.
  • The main test in a test flow is a function that will be called by the scenario to run the tests. There can be more than one, but all the test functions need to start with "test_", for example function test_wip_product_info(){}.
  • Test flow files should be stored under the project "test" directory tree and in a suitable sub-folder. The directories are arranged according to the main functional areas of ADempiere.
  • Test flows should generally be run through a scenario.
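A minimal skeleton following these rules might look like the following. The file name, function name and field values are illustrative; the wrapper calls are the ones described above, and a real test would need order lines and more assertions:

// test/sales/tf_380_example_order_status.sah - illustrative skeleton only
// No includes here: the scenario that includes this flow performs them.

function test_380_example_order_status(){
	wOpenWindow("Sales Order");
	iFormView();
	iNewRecord();
	fSetSearch("C_BPartner_ID", "Joe Block");
	iSaveRecord();
	fDocActionCheck("Complete", "Completed");
	wCloseWindow("Sales Order");
}

Note that the function name reuses the elements of the file name, keeping it unique when many flows are included in one scenario.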

Scenarios

A scenario is a collection of test flows and their supporting files. It is the file that is called to execute the tests. The structure of a scenario file is rather simple, as follows:

 // Scenario template.  Copy this file and replace the sections as required.
 // Optional: Call the scenario from a suite file and execute through the utils/run_test.xml Ant build 
 
 // Global Variable declarations 
 
 var $release = "Release 3.6.0LTS";
 var $usr = "GardenAdmin";
 var $pwd = "GardenAdmin";
 
 // Includes - common functions
 //This file includes all other supporting files so you only need to add the one in each scenario
 _include("../lib/common_functions.sah");  
 
 // Includes - test flows - as many as required.  Each should include one or more functions 
 // called test_*() which will execute the test.
 _include("../test/material_management/tf_fr3004020_allocation.sah");
 
 // Setup - called before each test	
 function setUp(){
 	versionTest($release);
 	loginDefault($usr, $pwd);  //defaults
 }
 
 // Tear down - called after each test
 function tearDown() {
 //	logout();
 }
 
 // Run the tests - anything included that starts with "test_".  The order of execution is undefined.
 _runUnitTests();
 
 _log("Scenario completed.", "info"); // Test Completed.
 
 // End of test

There are four parts to the scenario file to note:

  1. The file includes "../lib/common_functions.sah", where all the main elements of the wrappers and supporting functions reside.
  2. Multiple test flow files can be included.
  3. Common setup and tear-down functions can be defined for the tests so the starting conditions are known.
  4. The function call _runUnitTests(); will call all the included functions that start with the name "test_" in a random order.

Scenario files should be stored in the adempiere-test-suite/sahi/scenario directory.

Suites

Suite files are simply a list of scenarios to include in a larger test run. The suites are executed by a batch process in Sahi (test_runner) or through the Ant build run_test.xml in the utils directory. Create the myTest.properties file based on myTestTemplate.properties to customize which suite is run, and then execute run_test.xml as an Ant build.

The format of a suite file is trivial:

//  ADempiere test suite - bugs and feature tests related to release 3.7.0

../scenario/scenario_370_bugfix_tests.sah
../scenario/scenario_370_FR3208588_Rounding.sah
...

Store the suite files in the suite directory.

Developing Tests

One of the features of Sahi that seemed attractive was the ability to record the user's test actions and play them back. While this is useful when debugging tests, the recorded test code is not easily read or understood, and maintenance becomes an issue rather quickly. The wrapper functions allow the tests to be developed based on test cases without detailed knowledge of how particular fields will be presented in the final HTML code. It is even possible to develop the tests ahead of the implementation of a feature, given knowledge of the field names that will be implemented. The result is test code that is readable and easily maintained.

Start the test development with a clear idea of what needs to be tested. You can document this in a test case based on software logic and/or process logic to determine how the software is supposed to function given a set of data and user actions. As an example, suppose we want to test the indirect document creation process of customer sales orders from order creation to acceptance of payment. We could break this up into test cases based on the order completion and then the acceptance of payment - two distinct interactions. A simple test case for the order completion could include the following:

  • Test case Order Completion
    • Setup or verify that a customer record (BP) exists that will be used to purchase a product.
    • Setup or verify that the purchased product exists in inventory. Note the quantity in inventory.
    • Create a sales order and set the document type
    • Use a suitable customer (Joe Block) and purchase qty 1 of a simple stocked product (Oak Tree) on a sales order. Complete the order document.
    • Test the following:
      • That the order status is as expected.
      • That a shipment document has (or hasn't) been created and completed.
      • That an invoice document has (or hasn't) been created and completed.

Now there are a huge number of possible variations in this test based on all the fields that affect the software and process logic. Customer credit checking, stock level of the product, product BOMs, shipping rules, invoice payment terms and tax rules are just a few examples. To provide 100% coverage of all possible combinations would be a daunting task, so focus instead on the critical parts of normal processes to verify that the software works as expected with expected data if used in the expected way. This will form the basis of the test case. Then develop the corner cases - those situations where the software or process logic needs to respond gracefully to data or user actions that are not logical.

For example, consider the payment button on the Sales Order window. The normal case would be to accept payment for the full value of the order; the payment would be allocated to the invoice and the invoice paid. But the situation is complicated by the payment types (cash, credit cards, etc.) and the possibility of multiple or mixed payments (some cash, the rest on one or more credit cards). Then there are the corner cases of a zero payment, a negative payment, or a payment that exceeds the total of the order. How does the software handle the payment allocations in each of these cases?

Considering the process logic, the normal flow would involve entering data in a specific order and activating buttons. What happens if the order is changed or the process is cancelled at the wrong time? Does the software behave well when the user does something unexpected?

As the complexity of the test becomes apparent, it can be broken down further into simpler, more manageable test cases that collectively cover a larger number of possibilities. For the example above, we'll limit the test to the creation of shipment and invoice documents. In the software, this is governed by the following lines from MOrder.java in the completeIt() function.

		//	Create SO Shipment - Force Shipment
		MInOut shipment = null;
		if (MDocType.DOCSUBTYPESO_OnCreditOrder.equals(DocSubTypeSO)		//	(W)illCall(I)nvoice
			|| MDocType.DOCSUBTYPESO_WarehouseOrder.equals(DocSubTypeSO)	//	(W)illCall(P)ickup	
			|| MDocType.DOCSUBTYPESO_POSOrder.equals(DocSubTypeSO)			//	(W)alkIn(R)eceipt
			|| MDocType.DOCSUBTYPESO_PrepayOrder.equals(DocSubTypeSO)) 
		{
			if (!DELIVERYRULE_Force.equals(getDeliveryRule()))
				setDeliveryRule(DELIVERYRULE_Force);
			//
			shipment = createShipment (dt, realTimePOS ? null : getDateOrdered());
			if (shipment == null)
				return DocAction.STATUS_Invalid;
			info.append("@M_InOut_ID@: ").append(shipment.getDocumentNo());
			String msg = shipment.getProcessMsg();
			if (msg != null && msg.length() > 0)
				info.append(" (").append(msg).append(")");
		}	//	Shipment

		//	Create SO Invoice - Always invoice complete Order
		if ( MDocType.DOCSUBTYPESO_POSOrder.equals(DocSubTypeSO)
			|| MDocType.DOCSUBTYPESO_OnCreditOrder.equals(DocSubTypeSO) 	
			|| MDocType.DOCSUBTYPESO_PrepayOrder.equals(DocSubTypeSO)) 
		{
			MInvoice invoice = createInvoice (dt, shipment, realTimePOS ? null : getDateOrdered());
			if (invoice == null)
				return DocAction.STATUS_Invalid;
			info.append(" - @C_Invoice_ID@: ").append(invoice.getDocumentNo());
			String msg = invoice.getProcessMsg();
			if (msg != null && msg.length() > 0)
				info.append(" (").append(msg).append(")");
		}	//	Invoice

To get to this segment of the function, the order needs to pass the prepareIt() stage without error, and have a document type other than:

  • Proposal or Quotation
  • PrepayOrder (We won't test forced creation of a prepay order which is only used in the web store.)

The document types we should test and the expected results then are:

  • Proposal - status Completed, no shipment, no invoice
  • Quotation - status Completed, no shipment, no invoice
  • Prepay Order - status "Waiting Payment", no shipment, no invoice
  • On Credit Order - status Completed, shipment complete, invoice complete
  • POS Order - status Completed, shipment complete, invoice complete
  • Warehouse Order - status Completed, shipment complete, no invoice
  • Standard Order - status Completed, no shipment, no invoice

One corner case is the Return Material document sub-type which should not be available on the Sales Order window.

The test flow we need will have to do the following:

  • Assume the customer and product exist and that there is sufficient stock, but deal with a no-stock warning if one appears
  • Assume that the current calendar period is open
  • Perform the following steps:
  1. Open the sales order window
  2. Switch to the form view
  3. Create a new record
  4. Set the fields as follows:
    1. Customer (C_BPartner_ID)
    2. Doc Type Target (C_DocTypeTarget_ID)
  5. Save the sales order header
  6. Open the Order Line tab
  7. Switch to form view
  8. Create a new record
  9. Set the product (M_Product_ID)
  10. Save the record
  11. Return to the Order tab
  12. Complete the Order
  13. Compare the Document Status (DocStatus) with what was expected
  14. Click on the Zoom Across function and look for the following
    1. A single shipment if expected
    2. A single invoice if expected
  15. If a shipment was expected, click on the link in the Zoom Across and verify that the Document was completed. Close the shipment and return to the Order.
  16. If an invoice was expected, click on the link in the Zoom Across and verify that the Document was completed. Close the invoice and return to the Order.
  17. Repeat the above from step three (3) for each set of test data.

For this test, a set of data and a simple test program is all that is required. The test data can be included in the code, but for demonstration we'll add it to a CSV file. The test file will look like this:

/**
 * Test Flow Main Functional Tests
 * 
 * Create Indirect Documents
 * 
 * This test flow tests the creation of shipment and invoices from the 
 * Sales Order window
 * 
 * See http://www.adempiere.com/Functional_Tests
 * 
 **/

// Local test - un-comment next three lines if running just this file
var $mft_create_indirect_documents_CSVFilePath = "../ar_management/tf_mft_create_indirect_documents.csv";

_include("../../lib/common_functions.sah");

mft_create_indirect_documents();

// General test - un-comment if including this file in a scenario
//var $mft_create_indirect_documents_CSVFilePath = "../test/ar_management/tf_mft_create_indirect_documents.csv";

function mft_create_indirect_documents_detail(
		// Add variables here
	){
	// Add the tests here
}

function test_mft_create_indirect_documents(){
	
	var $data = _readCSVFile($mft_create_indirect_documents_CSVFilePath);	
	_log("MFT_Create_Indirect_Documents starting");

	logout(); // in case
	login("GardenAdmin","GardenAdmin","","","","","HQ","HQ Warehouse");

	wOpenWindow("Sales Order");	
	_dataDrive(mft_create_indirect_documents_detail, $data);
	wCloseWindow("Sales Order");

	_log("MFT_Create_Indirect_Documents completed");
}

This is a standard template for most test flows. Note that the naming "test_..." is important, as the scenario will call all the functions that start with "test". The testing is done using the Sahi function _dataDrive() to read in the test data from the CSV file and execute the test function for each line of data. The path to the CSV file is required and is set at the top of the file. There are two versions: one if the test flow is being used on its own, and one for the case where the test flow is part of a larger scenario. (There is probably a more elegant way to manage this - please suggest it if you have an idea.)
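The effect of _dataDrive() can be pictured as a loop over the CSV rows. The following is only an illustration of the behavior just described, not the actual Sahi implementation:

// Illustration only: _dataDrive($func, $data) behaves roughly like
for (var $i = 0; $i < $data.length; $i++) {
	// each row's fields become the arguments of the detail function
	mft_create_indirect_documents_detail($data[$i][0], $data[$i][1], $data[$i][2],
		$data[$i][3], $data[$i][4], $data[$i][5]);
}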

Our test will follow the script above, and we'll create a few variables for the test data as we go. Using the wrappers, it's fairly easy to generate the test script from the description above. We'll assume that we are logged in with the org and warehouse set, and that the Sales Order window has been opened by the calling function.

wOpenTab("Order");
iFormView();
iNewRecord();
fSetSearch("C_BPartner_ID",$CustomerName);
fSetList("C_DocTypeTarget_ID",$DocTypeTarget);
iSaveRecord();
wOpenTab("Order Line");
iFormView();
iNewRecord();
fSetSearch("M_Product_ID",$ProductName);
iSaveRecord();
wOpenTab("Order");
fDocActionCheck("Complete",$ExpectedStatus);
iZoomAcross();

At this point, we need to find the entries in the zoom across list. Specific wrappers for this haven't been developed, so we'll use the Sahi functions directly. Using Firefox's Inspect Element, we can see that each entry is a link with a name like "Shipment (Customer) (#1)" or "Invoice (Customer) (#1)". We can test for these with an _assertExists statement as follows:

if ($ExpectingShipment == "Y")
{
    _assertExists(_link("Shipment (Customer) (#1)"));
    _click(_link("Shipment (Customer) (#1)")); // Opens the shipment
    _assertEqual("Completed", fGetList("DocStatus"));
    wCloseWindow("Shipment (Customer)"); // Should leave us at the Sales Order window
    iZoomAcross();
}
else
{
    _assertNotExists(_link("Shipment (Customer) (#1)"));
}

if ($ExpectingInvoice == "Y")
{
    _assertExists(_link("Invoice (Customer) (#1)"));
    _click(_link("Invoice (Customer) (#1)")); // Opens the invoice
    _assertEqual("Completed", fGetList("DocStatus"));
    wCloseWindow("Invoice (Customer)"); // Should leave us at the Sales Order window
}
else
{
    _assertNotExists(_link("Invoice (Customer) (#1)"));
}

Our required variables are:

  • $CustomerName
  • $DocTypeTarget
  • $ProductName
  • $ExpectedStatus
  • $ExpectingShipment
  • $ExpectingInvoice

We can add these as columns in a spreadsheet and enter the test data to create a CSV file.

Joe Block,Proposal,Oak Tree,Completed,N,N,
Joe Block,Quotation,Oak Tree,Completed,N,N,
Joe Block,Prepay Order,Oak Tree,Waiting Payment,N,N,
Joe Block,On Credit Order,Oak Tree,Completed,Y,Y,
Joe Block,POS Order,Oak Tree,Completed,Y,Y,
Joe Block,Warehouse Order,Oak Tree,Completed,Y,N,
Joe Block,Standard Order,Oak Tree,Completed,N,N,

The final test script will look like this.

/**
 * Test Flow Main Functional Tests
 * 
 * Create Indirect Documents
 * 
 * This test flow tests the creation of shipment and invoices from the 
 * Sales Order window
 * 
 * See http://www.adempiere.com/Functional_Tests
 * 
 **/

// Local test - un-comment next three lines if running just this file
var $mft_create_indirect_documents_CSVFilePath = "../ar_management/tf_mft_create_indirect_documents.csv";

_include("../../lib/common_functions.sah");

mft_create_indirect_documents();

// General test - un-comment if including this file in a scenario
//var $mft_create_indirect_documents_CSVFilePath = "../test/ar_management/tf_mft_create_indirect_documents.csv";


function mft_create_indirect_documents_detail(
		$CustomerName,
		$DocTypeTarget,
		$ProductName,
		$ExpectedStatus,
		$ExpectingShipment,
		$ExpectingInvoice
	){

	wOpenTab("Order");
	iFormView();
	iNewRecord();
	fSetSearch("C_BPartner_ID",$CustomerName);
	fSetList("C_DocTypeTarget_ID",$DocTypeTarget);
	iSaveRecord();
	wOpenTab("Order Line");
	iFormView();
	iNewRecord();
	fSetSearch("M_Product_ID",$ProductName);
	iSaveRecord();
	wOpenTab("Order");
	fDocActionCheck("Complete",$ExpectedStatus);
	iZoomAcross();

	if ($ExpectingShipment == "Y")
	{
	    _assertExists(_link("Shipment (Customer) (#1)"));
	    _click(_link("Shipment (Customer) (#1)")); // Opens the shipment
	    _assertEqual("Completed", fGetList("DocStatus"));
	    wCloseWindow("Shipment (Customer)"); // Should leave us at the Sales Order window
	    iZoomAcross();
	}
	else
	{
	    _assertNotExists(_link("Shipment (Customer) (#1)"));
	}
	
	if ($ExpectingInvoice == "Y")
	{
	    _assertExists(_link("Invoice (Customer) (#1)"));
	    _click(_link("Invoice (Customer) (#1)")); // Opens the invoice
	    _assertEqual("Completed", fGetList("DocStatus"));
	    wCloseWindow("Invoice (Customer)"); // Should leave us at the Sales Order window
	}
	else
	{
	    _assertNotExists(_link("Invoice (Customer) (#1)"));
	}

}

function test_mft_create_indirect_documents(){
	
	var $data = _readCSVFile($mft_create_indirect_documents_CSVFilePath);
	
	_log("MFT_Create_Indirect_Documents starting");
	
	logout(); // in case
	login("GardenAdmin","GardenAdmin","","","","","HQ","HQ Warehouse");

	wOpenWindow("Sales Order");
	
	_dataDrive(mft_create_indirect_documents_detail, $data);

	wCloseWindow("Sales Order");

	_log("MFT_Create_Indirect_Documents completed");
}

Once the script runs as a stand-alone test, it can be added to a scenario. When doing this, the first few lines have to be adjusted to take into account the file path and the fact that the common functions will be loaded at the scenario level. You can see this test in this form in the Mercurial repository. The header of the scenario looks like this.

...
// Includes - common functions
_include("../lib/common_functions.sah");

// Includes - setup steps that are order specific
_include("../test/system/tf_mft_create_client.sah");
_include("../test/customer_relationship_management/tf_mft_bpartner_groups_setup.sah");
_include("../test/customer_relationship_management/tf_mft_payment_term_setup.sah");
_include("../test/customer_relationship_management/tf_mft_bpartner_setup.sah");
...
_include("../test/material_management/tf_mft_pricelist_setup.sah");

//Includes - test flows which are order independent
_include("../test/ap_management/tf_mft_rfq_to_po.sah");
_include("../test/ar_management/tf_mft_create_indirect_documents.sah");
...