Capgemini Oracle Blog

Opinions expressed on this blog reflect the writer’s views and not the position of the Capgemini Group

5 Best practices for SoapUI Pro and Oracle SOA Suite.

Categories: SOA, How-to

Over the last year I had the opportunity to be involved in an Oracle SOA Suite 10g to 11g upgrade. At the starting point, the 10g situation, automated tests were not yet part of the landscape. As part of our upgrade approach, we created SoapUI tests to check that the 11g code would behave functionally the same as the 10g code. This ensured the quality of the code and also enabled us to run a full regression test within 30 minutes. During the project we created around 300 test cases in SoapUI. In addition, we continuously learned from working in an upgrade environment and improved our test approach as the project progressed. In this blog I will elaborate on five best practices for using SoapUI in combination with Oracle SOA Suite.

 

1. Minimize Groovy scripting.

As we built the SoapUI tests around functional flows, we realized that a large part of the SOA Suite projects is reused across various functional flows. To improve maintainability, we created Groovy scripts that handle the messaging for these shared steps. This shortens the test case by letting the Groovy script handle multiple steps in the background. We could also store the Groovy scripts in a scripts folder and call them from any test case we created. See figure 1.

Figure 1: Using a Groovy script to shorten the number of steps.
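
To give an idea of the pattern (step and property names here are made up for illustration, not taken from the original project), such a shared script is essentially a Groovy test step that runs the common messaging steps in the background and pulls values out of their responses:

```groovy
// Simplified sketch of a shared script stored in a scripts folder and called from
// several test cases; it runs the common messaging steps in the background.
// Step and property names are illustrative, not taken from the original project.
def runCommonFlow(runner, ctx, logger) {
    runner.runTestStepByName('Post Order Message')
    runner.runTestStepByName('Get Order Status')
    // Property expansion with an inline XPath on the previous step's response
    def status = ctx.expand('${Get Order Status#Response#//*:status}')
    logger.info "Common flow finished with status ${status}"
}

runCommonFlow(testRunner, context, log)
```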

It started out simple, with a few generic post and get message steps. But not long after, the first if-then-else statements crept into the scripts, and from there the complexity kept growing.

At first this went well, but whenever a test failed it was very difficult to figure out exactly where it had failed and what action we had to take to fix it. We needed to make clear exactly which steps each test case executes and where the errors occurred, so we removed the Groovy scripts and used separate test steps instead. This resulted in more steps, as shown in figure 2, but made maintenance much simpler.

Figure 2: More steps, but better visibility.

 
Conclusion: Minimize Groovy scripts and create separate test steps to make your test cases as clear as possible.
 

2. Use assertions.

It seems like a no-brainer to test the outcome of your steps (figure 3). SoapUI Pro is a big advantage here because it allows you to configure XPath assertions without any knowledge of XPath itself, which makes building them fast and error-free. Try to make an assertion that covers the whole message; this makes the assertion complete and reliable (figure 4).
Figure 3: Using assertions in SoapUI.

Figure 4: XPath Match assertion to check the contents of a message.
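
SoapUI Pro builds these assertions for you through its point-and-click XPath wizard. Purely to illustrate what such a check amounts to (and as a fallback for the open-source edition), here is a rough script-assertion equivalent; the namespace and element names are assumptions, not taken from the original project:

```groovy
import com.eviware.soapui.support.XmlHolder

// Script-assertion sketch; namespace and element names are assumed.
// Pro's XPath Match assertion covers the same checks without any scripting.
def holder = new XmlHolder(messageExchange.responseContentAsXml)
holder.namespaces['ns'] = 'http://example.com/order'   // assumed namespace

assert holder.getNodeValue('//ns:orderId') != null      // element must be present
assert holder.getNodeValue('//ns:status') == 'PROCESSED'
```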
 
Conclusion: Use a minimum of one assertion per test step.

 

3. Use DataSources to test XSL/XPath transformations.

A complicating factor in SOA composites is the inclusion of XSL/XPath expressions, for example for optional parts in your message transformations or for DVM lookups. SoapUI DataSources can be used to test variations of these transformations (figure 5).

Figure 5: DataSources can be used to test XSL/XPath expressions efficiently.

The DataSource can also be used as a reference source inside assertions. This also allows you to check for optional parts in the message (figure 6).

Figure 6: DataSource reference in an XPath assertion.
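
Concretely, the expected content of an XPath Match assertion can simply contain a property expansion pointing at the current DataSource row, for example ${DataSource#expectedDvmValue}. The same expansion is available from a script assertion via context.expand; the sketch below uses hypothetical column and element names:

```groovy
import com.eviware.soapui.support.XmlHolder

// Sketch with a hypothetical DataSource step named "DataSource" and columns
// countryCode / expectedDvmValue, one row per transformation variant.
def expected = context.expand('${DataSource#expectedDvmValue}')

def holder = new XmlHolder(messageExchange.responseContentAsXml)
holder.namespaces['ns'] = 'http://example.com/order'   // assumed namespace

// The DVM result in the response must match the expected value for this row
assert holder.getNodeValue('//ns:countryName') == expected
```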
 
Conclusion: A DataSource is a powerful tool when the flow stays the same but you want to test different variations of messages.
 

4. Use a separate SoapUI TestCase for each test case.

We initially used a DataSource loop and Groovy scripting to reuse test steps and steer the process within a single SoapUI test case. The reusability seemed nice, but over time the maintainability of all the different flows in Groovy scripts deteriorated. We found it was easier, faster and less error-prone to create separate SoapUI test cases with a single test path each.
 
Figure 7 shows two typical test cases, a success flow and a fault flow, implemented in one SoapUI test case. Because it loops into the fault flow, it is hard to determine whether a failed step belongs to the success or the fault flow.

Figure 7: Using a DataSource and Groovy script to support multiple flows in one SoapUI TestCase.
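
The steering logic in figure 7 looked roughly like the sketch below (step and column names are invented): a Groovy step reads a flag from the current DataSource row and jumps to the matching branch, which is exactly what makes a failure hard to trace back to one flow:

```groovy
// Sketch of the kind of flow-steering script we later removed; step and column
// names are made up. It reads a flag from the current DataSource row and jumps
// to the matching branch.
def flow = context.expand('${DataSource#flowType}')
if (flow == 'FAULT') {
    testRunner.gotoStepByName('Send Invalid Order')    // fault branch
} else {
    testRunner.gotoStepByName('Send Valid Order')      // success branch
}
```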
 
If you split these into two separate SoapUI TestCases, it looks like figure 8.


Figure 8: One additional SoapUI TestCase is created to get rid of the loop.
 
 
The Groovy script that determined which steps to run and which to skip has been removed. The DataSource and the loop were also no longer needed to handle two test cases in one flow.
 
How many test cases should I create?
Start with the flows that are the most common, generally one success and one fault flow. After that, add test cases when new functionality is added to the flows or when bugs arise that are not yet covered by your test cases.
 
Conclusion: When you want to build different test cases, aim for separate SoapUI TestCases. It keeps your test cases clear and maintainable, and when errors occur you can quickly pinpoint the failing step.
 

5. Create a dummy service for each JMS endpoint.

JMS queues are commonly used components in a SOA Suite landscape. To interact with these queues from SoapUI you need to trick SoapUI a little (see here). Creating one dummy service and adding all the endpoints to that single service definition initially seemed sufficient. It worked, and we did not foresee any problems.
SoapUI Pro provides the option to use the concept known as 'environments' (figure 9).

Figure 9: SoapUI Pro's environment configuration
 
This option enables you to change the endpoints of all test steps just by changing the environment setting. As soon as we introduced a second environment, we ran into the issue that we could not use the environment configuration in SoapUI to set the endpoint for each queue: because we had only one service definition for all JMS endpoints, we also had only one JMS endpoint to configure (figure 10).
 
When you work in the default environment setting, you can configure the endpoint per test step. When you switch to a different environment configuration, the endpoints of all test steps created from the same service definition change to whatever is configured for that environment. We ended up with some rework to implement this change in our existing test suites. The situation after each JMS endpoint got its own service definition is shown in figure 11.

Figure 11: Each service definition has its own endpoint.
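
To make the idea concrete, the mapping we ended up with looks roughly like the sketch below. The service, session and queue names are invented, and the jms://session::queue_... endpoint format is assumed from SoapUI's HermesJMS integration:

```groovy
// Illustration only: one dummy service (interface) per JMS queue, so the Pro
// environment configuration can give each queue its own endpoint per environment.
// Names and the endpoint format are assumptions, not from the original project.
def endpointsPerEnvironment = [
    DEV : [
        OrderInQueueService : 'jms://devHermesSession::queue_ORDER_IN',
        OrderOutQueueService: 'jms://devHermesSession::queue_ORDER_OUT'
    ],
    TEST: [
        OrderInQueueService : 'jms://testHermesSession::queue_ORDER_IN',
        OrderOutQueueService: 'jms://testHermesSession::queue_ORDER_OUT'
    ]
]

println endpointsPerEnvironment.TEST.OrderInQueueService
```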
 
 
Conclusion: If you want to use the Environments option in SoapUI Pro, create a dummy service for each JMS endpoint.
 

Thanks for reading this blog. If you have any questions regarding SoapUI and SOA Suite, feel free to reach out!
 

About the author

Martijn van der Kamp
I am an Oracle Integration Consultant. My specialization lies in the areas of SOA Suite, BPM Suite, BAM and databases. Colleagues think highly of my social skills and customer focus, and I'm known for my drive and enthusiasm. Within projects I take great responsibility for the work I deliver, which shows in the quality and timeliness of delivery.
4 Comments
Well done man! good and useful description..
Thanks Mazin!
Nice one. I have been involved in a lot of upgrades, and SOA upgrades are always a big challenge. I have used SoapUI for unit testing and simple functional testing, but not like this. This post gives us one more option (SoapUI) for how we can think of testing the services for a complete upgrade. I like the use of DataSources to test XSLTs; this addresses a major problem. One other requirement we always have is to compare the output generated by both SOA versions to make sure that it will not create issues down the line. This would eliminate testing of all the underlying systems. Do you see any easy way to achieve this using SoapUI?
Hi Rama, in my opinion you are trying to eliminate the risk of any difference between the 10g and the migrated 11g code. We used the following steps to cover this: build SoapUI tests on the 10g code, create assertions in the tests to check the contents of the messages, migrate the code, and then run the tests against 11g to confirm they still pass. The output generated by the SOA Suite was, in our case, either a web service callback or a message put on a JMS queue (which in the real world would be picked up by surrounding applications). The JMS queue was simply read out via Hermes, which was easy to do. For processes that are invoked via a web service and return a callback, we provided a callback address in the WSA header. We built a Groovy script to determine this address automatically so we could run the test from any developer's machine. We used a SoapUI mock service step to receive the callback message and run assertions on it.
