Sunday, 8 March 2015

Restful services: to XSD or not to XSD? That is the question


A question I've been considering lately is “should we be using XSD to describe and validate our RESTful payloads?”. A lot of REST APIs are described in a fairly loose way (e.g. example payloads rather than strict definitions). Obviously XSD can define any XML structure, but in an RPC world the XSD is tied to the operation, not a stateful entity. Does this make a difference? Is there a reason XSD is any less applicable to REST services than to a service defined in WSDL?
 
There’s surprisingly little I can find in the blogosphere about the pros and cons of this – most of the blogs around XSD and REST are about how to implement it using different frameworks/technologies (so obviously it can be done). One blog explicitly said that XSD was a “bad idea for REST”, but not why – which is hardly a strong argument. So I got to thinking from first principles, and after a bit of thought and some conversations with colleagues at the coffee machine, we agreed that XSD validation is entirely possible, and probably desirable. That said (like a lot of REST use) there are some rules to follow – and this post gives my humble opinion as to what they are.

Foundation Concepts

As a starting point, it’s worth outlining some things I think are true of all services (feel free to disagree in the comments section).

F1 - Service contracts are good: 
It’s vital for service consumers to know what a service does and how to interact with it. This applies to all component software where Encapsulation and Information Hiding are foundational concepts.

F2 - Machine readable contracts are better: 
It’s better to have a contract that code can read, rather than one in Word, a wiki, or on paper. It’s also great if your contract and validation rules come from the same source; that way there’s no chance of disagreement between what the service claims to do and what the validation allows it to do. This is one reason I’m such a big fan of XSD and so many of the other open standards.

F3 - Open standards are a good thing
REST is built upon the HTTP standards for transport; XML should be based on the open XSD standard for validation. Of course there are other standards (JSON to name but one) – but if you’re using XML, then XSD seems a sensible way to both document and validate your messages – everyone understands it and there is a mass of tooling (both free and commercial) to help.

F4 - Service contracts are not unrestful
Knowing how to represent your state is key to being able to transfer your state. Of course, to be RESTful (regardless of how you describe it) the contract needs to be entity-focused and not RPC-focused, e.g. an XSD “Customer” object, not an XSD “AddCustomer” object.
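For example, a minimal sketch of an entity-focused schema (the element names are illustrative, not from any real system):
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
   <!-- The contract describes the Customer entity, not an operation -->
   <xs:element name="Customer">
      <xs:complexType>
         <xs:sequence>
            <xs:element name="Surname" type="xs:string"/> <!-- mandatory -->
            <xs:element name="Firstname" type="xs:string" minOccurs="0"/> <!-- optional -->
         </xs:sequence>
      </xs:complexType>
   </xs:element>
</xs:schema>
The same schema is then used whether the Customer is being POSTed, PUT or returned from a GET.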

The Rules

OK, hopefully we’re in agreement so far. In that case, how do we do XSD REST validation? It’s actually quite simple, but as we discussed various scenarios we found some basic rules needed to be followed or else we got into a bit of a mess. These rules make a REST XSD quite different to an RPC XSD:

Rule 1: Entities must be described with one schema, regardless of ACTION
Perhaps quite intuitive – if something is mandatory/optional when you create (POST) an object, it follows that it is equally mandatory/optional when you GET it later, or PUT it back. It also follows that the format of an object remains fixed throughout the life of that object – if not, how is it the same object?

This obviously doesn't mean the state of the object won’t change (that would be silly). Values can change and optional fields can be filled in which were previously missing. What it does mean is:
  • No fundamental changes to structure (e.g. the root element won’t change name)
  • No change to element cardinality (e.g. minOccurs=1 changing to 0, or maxOccurs=1 changing to unbounded)
  • No change to data types (e.g. integers changing their min/max values or becoming strings)
The key here is that data validation must be about structure, and the core of what makes an object valid – not about a particular use case. For example, in a given organisation a “Customer” might always be invalid without a Surname – these are essentially the same sort of rules we apply in database CREATE TABLE statements.

Rule 2: No partial updates - Entities are created/updated/retrieved as a whole
In order for XSD validation to occur cleanly we can't allow partial updates by POST (or, shock horror, by PUT). If we try to write an XSD which allows partial updates, it becomes so lax it can’t actually achieve validation. Either everything becomes optional, or there are lots of “choices”. In either case invalid XMLs can slip through the gaps and the XSD will report them as valid.

We did explore the idea of having different XSDs (or an XSD with a choice in it) for different scenarios (e.g. one which describes everything needed for a full update, and another which is used for partial updates). This is possible, but very quickly comes unstuck because:
  • REST has no procedures - At a conceptual level, attempting to impose “scenarios” on REST is fundamentally troubling, and has the potential to get everyone out of the RESTful mindset.
  • More concretely, there isn't a way to tie different scenarios to different XSDs – the client wouldn't know which XSD to use for a given scenario other than by convention (so you break Foundation Concept F2). Even if you could somehow specify that we use object_get.xsd for the GET action, and object_put.xsd for the PUT action, Rule 1 explains why you shouldn't want to.
OK, but what about the PATCH operation? The pros and cons of PATCH are beyond the scope of this blog, but in terms of XSD, PATCH doesn't really help:
  • The patch object (as described very well in William Durand's blog) isn't the same format as our object – so we can't use our XSDs.
  • These patch operations have similar issues of being so loose that things slip through the gaps, and PATCH doesn't solve the F2 or Rule 1 concerns in the previous bullets.
OK, so if we follow the rules laid down above is everything easy? Well, almost. There are a few awkward scenarios we identified so it’s worth highlighting them.

Scenarios & Examples

Scenario 1: Elements null on initial POST, then mandatory afterwards.

Example 1: The primary key is missing in POST but needed in PUT

This is OK and breaks no rules; in fact this is standard REST practice. The key is in the URL and not in the payload, so we’re OK.
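For illustration, a hypothetical exchange (the URLs and id are invented):
POST /customers (no id in the payload – the server assigns one and returns it, e.g. via a Location: /customers/42 header)
PUT /customers/42 (the id lives in the URL, so the same Customer schema validates both payloads)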

Example 2: Some fields are “generated” by the system we POST to, and from then on they are mandatory, e.g. creating a “user” object sets their home folder, which we want to be mandatory.

This obviously breaks Rule 1 – we’re asking for the validation rules of POST to be different to those of PUT. What we do about it depends on the scenario in question: 
  • Is this actually mandatory from a data validity perspective, or only from a business process perspective? That is, is the attribute mandatory at this point in the object’s lifecycle but perhaps not later? If this is business process logic, then XSD is the wrong tool for the job - the check should be in application logic.
  • Is this a separate entity? If so, should it be created by its own POST and then passed by reference to this entity’s POST? In DB terms, do we need to create a foreign key object before we create this object?
  • If neither, then we're into the horrible territory of either having a lax schema, or making the field mandatory but the POST method knowing to ignore the value sent in (and explaining to the client that they have to send nonsense). None of these is ideal, obviously.

Scenario 2: The consumer has the rights to GET the whole object, but only has the authority to update a subset of the fields

Example: A consumer can GET an Order, but only has rights to update the “comments” section.

This doesn't break a rule. This is about whether they’re authorised to change a field, not whether the change is valid (authorisation is not validation).

The consumer can PUT the object as normal – the XSD will validate that the submitted data is valid. If the data is valid then the API needs to check whether they've tried to do something they shouldn't (i.e. change a field they’re not allowed to) – that’s not an XSD problem:
  • If an unauthorised change has been made, then return a 403 – Forbidden
  • If this is an authorised change then the PUT is OK; return a standard response in the 2xx range.
This might be inefficient (comparing the whole object to ensure no unauthorised changes have been made), but it’s not unrestful and it keeps to the rules. To improve efficiency we might be better to spin up an OrderComments API with a PUT which only changes the comments.
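A sketch of what that sub-resource might look like (the URL and element names are invented for illustration):
PUT /orders/1234/comments
<comments>
   <comment>Please leave the parcel with a neighbour</comment>
</comments>
The consumer now PUTs a whole (small) entity which they are authorised to change, validated against its own schema – so Rules 1 and 2 still hold.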

Scenario 3: Consumer doesn't have access to view every part of an object (some are obscured/removed based on their profile), but wishes to PUT.

Example 1: Depending on profile some consumers see a customer’s phone number, others see 07**********12

This doesn't break a rule, and is just another form of Scenario 2 – the consumer doesn't have rights to change this field, so they just pass back the masked phone number.

Example 2: Depending on profile some consumers see an Order’s payment information, for other consumers it is omitted entirely (e.g. what happens with Mashery Response Filters).

In order for the message to validate against the XSD, the fields which were removed cannot be mandatory – otherwise the GET would fail its own XSD (which would obviously be bad).
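In schema terms this just means marking the filterable element as optional – a one-line sketch against a hypothetical Order type:
<xs:element name="PaymentDetails" type="PaymentDetailsType" minOccurs="0"/> <!-- omitted entirely for some consumers, so it cannot be mandatory -->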

As long as the PUT contains what was in the GET, the consumer isn't breaking Rule 1, and from their point of view they're passing back the whole object, so they're not breaking Rule 2 either. As with Scenario 2, the responsibility is with the API to manage. This could also be inefficient and is possibly a bad idea, but the issue is with API design and not with the validation.

Conclusion

As far as I can tell XSD is perfectly doable with REST so long as certain rules are followed. Within these constraints, and with good API design, we can both document and validate our payloads using XSD. There are some use cases which push these rules, but I've not yet found a scenario under which the rules break.

If anyone would like to feed back any comments I'd be glad to hear them.

Wednesday, 25 September 2013

Security Radar Graphs (2) - Implications of security requirements

As seen in the last post, radar graphs are a useful way of capturing the security requirements of a service or set of services. As promised, this second part both defines what is meant by the different levels on each axis and notes some basic considerations which need to be made when making a selection for any given aspect of SOA security.

To recap, the radar graph in question looks a little like this:


So, taking each axis in turn, here are the descriptions and the considerations which they warrant. Sorry the formatting is a bit ugly - Blogger doesn't seem to support tables very well.

Network Filtering

The level to which your network security filters who can get access to your SOA.
  • None: Anyone on the internet can access services
    • No reduction in traffic by the network layer; this increases the need to carefully consider network separation to prevent common attacks.
  • Firewall: Partners access services over the internet - restricted by firewall settings (e.g. IP filtering)
    • IP spoofing is possible. 
    • Cloud partners can often change IP addresses - can be difficult to maintain an accurate white-list and a given IP range can't be guaranteed to be that of a trusted partner. 
    • Multiple potential consumers exist within the partner network (in the partner IP range). These consumers are indistinguishable based on IP address. 
    • Therefore... whilst more secure than "None", network separation and/or authentication are still important.
  • VPN/Leased Line: Selected partners can access services via a secured channel
    • Some external providers (especially cloud providers) may not accommodate such a channel 
    • This potentially exposes the core network outside of the data centre - consider the importance of firewall rules to restrict the services that can be accessed (e.g. just the ESB) as well as authZ/authN.
  • Closed Network: No off site access to services
    • No benefits of partnerships, outsourcing, cloud technology etc 
    • Whilst secure, this may not be a possibility given these other considerations

Network Separation

The level to which your network security isolates your SOA from external access.
  • None/Reverse Proxy: A basic reverse proxy (not an XML aware gateway) is used to separate consumers from the ESB. This sort of proxy may be able to provide SSL but not WS-Security. It cannot detect SOA-specific types of attack (e.g. XML injection)
    • The whole ESB must be considered externally facing (whatever network filtering determines 'external' is). 
    • If network filtering is 'none' (i.e. open to the internet) this approach is woefully inadequate, if the only external access is via a VPN from a trusted partner then this may be acceptable.
  • Second ESB: Have an 'externally facing' ESB (behind the reverse proxy) before messages get to the internal ESB (e.g. this MSDN pattern)
    • This can mitigate against certain problems (e.g. DoS attacks impacting the internal ESB), and can provide a layer of separation. 
    • It won't guard against attacks as thoroughly as an XML Gateway. 
    • Will likely require more custom code/detailed config to detect attacks than a COTS XML gateway would do as a matter of course. 
    • As with 'None/Reverse Proxy', the nature of the network filtering will influence the applicability of this solution.
    • This solution also necessitates additional services to pass traffic from outside to the internal ESB - thus additional config/maintenance and licensing costs to implement the 2nd ESB.
  • XML Gateway: Buy a COTS XML gateway specifically designed for exposing services to the internet (e.g. Vordel/Oracle API Gateway, IBM DataPower, Cisco ACE). This is significantly more powerful than a reverse proxy because it's XML aware, and has inbuilt security and threat detection
    • Initial cost outlay can be high, although the ease of configuration vs a second 'external ESB' can balance this a bit.
    • The hardening of such devices is more expertly developed and tested than custom config/code so is not only better able to detect complex attacks but also gives better peace of mind.
    • These devices can offer additional benefits in terms of security policy management (see below).

Security Location

Where in the architecture security is enforced.
  • Perimeter: Security is only applied at the perimeter, internal traffic is considered 'trusted'.
    • Only protects against external threats, despite research suggesting 70% of breaches are internal. 
    • In an increasingly interconnected service oriented world, is this internal/external split an oversimplification - can we keep thinking in these terms?
  • ESB: Security is applied to consumers of ESB services, but no security exists between the ESB and back-end systems
    • Is there anything to stop consumers going direct to a back-end service if only the ESB services are secured (e.g. internal firewalls)? If not this may not be good enough.
  • Endpoint (trusting): Services on the ESB and back-end services are secured, however back-end services only validate that their consumer is the ESB, and trust the ESB to perform downstream validation.
    • This is a generally good model so long as the answers to the following questions are 'yes':
      1. Do system/data owners trust the ESB development and security governance structures?
      2. Does this trust delegation meet the audit requirements for back end services (the back-end system doesn't need to record the ultimate consumer)?
      3. Are services accessed only via ESB and not directly?
  • Endpoint (aware): Services on both the ESB and back-end are secured, and validate the ultimate consumer. The ESB merely mediates the connection but doesn't break the security chain.
    • Must be done carefully to maintain SOA decoupling principles - if back-end systems are aware of which systems are accessing them it makes change more difficult and blows a hole in the idea of fully decoupled systems. So long as this is done with some thought it's not a big problem (e.g. by separating security from service configuration and with appropriate governance).
    • Endpoint security might not be an option for many legacy systems and certainly not distributed policy based endpoint security. This is especially true if the ESB is connecting via an adaptor rather than by calling a web/rest service.
    • Even if this is possible, systems may require re-engineering to achieve it.

Policy Separation

The level of separation between security policies and service implementation.
  • None: Security hard coded into service endpoint
    • A reasonably bright 6 year old should be able to point out why this is a bad idea so I'm not going to say much about it. Unless it's impossible to avoid, this is a bad option.
  • Config: Security in service but separated into configuration rather than code - can be changed without redeployment
    • This is more sensible, and is often the way that policy is applied. It makes ongoing maintenance a bit more difficult than with a central policy, but there may be any number of good reasons to do this:
      • Technology doesn't support central security policy.
      • There isn't the desire to pay for central policy management.
      • There isn't sufficient governance to drive a more joined up approach, each service developer has to do the best they can (not a perfect reason obviously).
  • Central Policy:  A policy is configured and maintained at a central policy server location and disseminated to endpoints automatically
    • The nirvana of policy administration is having all policy regardless of what/where it's applied stored in a central security policy server. 
    • Obviously there are cost implications and many existing services/technologies may not support this (as ever legacy is a big challenge). 
    • This is interesting and complex enough to write whole books about - here we'll just say it's nice if you can do it.

Authentication - System

The way a service identifies which consumer is invoking it.
  • None: Anyone can consume service
    • Even if a service has no security requirements, this makes it difficult to enforce governance (i.e. registration) and keep track of consumers. This governance problem can alternatively be addressed with a UDDI registry, but if that's not your approach then authentication can be useful to ensure you've not got any unknown consumers. 
    • If you don't know who is consuming a service how can you know when it can be switched off?
  • Transport: The transport medium confirms consumer identity (e.g. mutual SSL certificates)
    • This can be more difficult to manage and change than credential based authentication. 
    • It can also be harder to handle multiple 'hops' across intermediates whilst still keeping track of the originator's identity.
  • Credential: A credential embedded in the message identifies the consumer
    • Credentials can authenticate the sender, however this relies on the message being secure, both to ensure the message isn't altered and to ensure the credentials aren't captured and used to send future messages. 
    • Unless the network is completely secure (if there is such a thing) this requires a level of encryption (see 'Data in flight').
  • Digital Signature: The message is hashed and digitally signed
    • Signatures can prove not only the identity of the sender, but also confirm that the message has not been altered in flight. This is also useful if the message is sent over an insecure medium, as the credentials cannot be read and replayed.
    • As with message level encryption (below), if the credentials have to be passed downstream then the message cannot be altered. This makes the decoupling (type mapping) benefits of SOA more difficult to realise.
    • On one hand keys can be a little more difficult to manage than credentials; then again, given their asymmetric nature they can be distributed easily and securely, because only the public key needs to be sent from the consumer to the provider, and that can be done without any concern about interception.

Authentication - User

The way a service identifies what end-user has caused the service to be invoked.
  • None: No end user identification or user involvement
    • For example a system calls a service based on a timer.
  • Trust: User is validated by the consuming system and any authN/authZ in the middleware is based purely on that system (see Authentication - system)
    • Can the end-system be trusted to restrict access? If so the service can delegate this responsibility and merely authenticate that the consumer is a trusted system.
    • This assumes that audit requirements in the back-end system do not require knowledge of end user. If the service is performing an update this might not be true.
  • Corporate: Access is controlled based on credentials held in an organisation's user list (e.g. against corporate Active Directory)
  • Partner: Access is controlled based on credentials for internal users or partners users (e.g. against Active Directory Federation Services)
    • Do both partner organisations have federated AD?
  • Global: Access is controlled based on global user Ids / social sign-on e.g. For self service user accounts for websites
    • Much more complex identity management suites are required - although it's hard to imagine when this sort of end user account would be trusted within the SOA rather than validated at a web/mobile gateway and trusted from there.

Data in flight

The protection given to data whilst in transit between consumer and provider or vice versa.
  • None: Unencrypted
    • Is any data sensitive? 
    • Is it on a trusted network (e.g. internal data centre, not the internet)?
  • Transport Encryption: Encrypted transport e.g. HTTPS or queue encryption.
    • At any point where the transport terminates, the terminating system (e.g. ESB/proxy/gateway) must be trusted: support staff can review this information, and the system may log it in plain text. If the intermediaries are trusted this may be sufficient.
  • Partial Message Encryption: Key message elements are encrypted e.g. XML Encryption of payment details.
    • Where intermediates are not trusted this is a great solution. The intermediaries never see the sensitive fields, and it makes the whole question of masking (see 'Data at rest') far easier - any messages logged by the middleware are logged with those elements already encrypted.
    • There are downsides however: 
      • Once encrypted, the data in the message cannot be validated by the middleware. This reduces some of the benefits of the middleware in stopping potentially poor data 'at the front door'. 
      • This can also introduce security problems of its own, as the perimeter security cannot be as thoroughly applied to encrypted messages. For example an encrypted message (or encrypted element) can't be scanned - e.g. for SQL injection, XML explosion attacks.
  • Full Message Encryption: The whole message is encrypted.
    • The same as the above but the downsides are exacerbated.

Data at rest

The protection given to data when not in transit over the network.
  • None: Data at rest within the architecture is not protected
    • This is fairly common from what I've seen - data at rest is stored within the core of the network and not encrypted although obviously it's protected by the OS/network security.
  • Masking: Data is masked when written to intermediate storage e.g. ESB logs
    • One of the key considerations which needs to be made for each service is whether data can be logged to audit/exception logs. If not, how do we implement field level masking? Are only some fields masked or do we mask all body data? 
    • How does this impact the ability of support teams to debug?
  • Encrypted:  Data is encrypted as it's written either in part (field level encryption) or at the level of complete message or even complete database encryption.
    • Is logging done to a DB or file system? Does the FS/DB support encryption? Are messages already encrypted? How frequently do these need to be retrieved? Can the support staff easily decrypt to be able to fix issues?
Right that's all for now - sorry that turned into a longer post than I'd expected when I started thinking about this topic. Any thoughts, omissions etc feel free to add them in the comments below.

Sunday, 22 September 2013

Security Radar Graphs - Capturing SOA security requirements

Securing an SOA is a complicated business. It isn't a single concept but various moving parts such as encryption, authentication, and non-repudiation. Not only is there no "one size fits all", but this multi-dimensional nature means there isn’t even a single sliding scale for low, medium and high security. One service might need transport encryption (e.g. SSL) and non-repudiation but have no need for encryption at rest behind the firewall; another requires encryption end-to-end including at rest, but does not need non-repudiation - neither is more secure, merely differently secure. In an SOA some security requirements will be enterprise wide (e.g. perimeter security), whilst others will vary on a service by service basis (e.g. authentication/encryption levels). This quickly becomes a complex picture.

Aside from the wasted expense, applying unnecessary security can reduce flexibility, increase complexity, and deny the benefits of the cloud. Obviously business requirements come first (it's why we're here after all), and tight budgets mean security needs to be targeted to where it's really needed. In this security vs flexibility consideration, one final aspect is worth considering: how does security affect SOA benefits? There's one obvious example, message level encryption can reduce the ability of intermediaries to validate messages, so the ESB or XML firewall can't stop invalid messages. Here a security option can directly impact a non-security aspect of the solution.

It's obvious that defining SOA security is complicated to say the least. This was a challenge we had whilst discussing security requirements recently. The solution we found was a radar graph showing options which looked a bit like this:



Looking at the graph:
  • The three axes above the horizontal (Network filtering, Network separation, Security location) will usually be determined for the architecture as a whole.
  • The two horizontal axes (policy separation, data at rest) could be decided on a service by service basis but will likely have wider impacts if security is increased (i.e. introducing central policy management will require new products; masking audit logs will potentially impact all services if a common logger is used).
  • The three axes below the horizontal can be determined on a service by service basis and as such can be included in the service NFR template.

In a future post I'll discuss the impact of the various options on each axis in terms of cost, flexibility and any other knock on effects.

Thursday, 4 July 2013

SoapUI: Data Driven Testing (without going pro)


One of the neat things which SoapUI Pro offers is called data driven testing http://www.soapui.org/Data-Driven-Testing/functional-tests.html. The idea is that you can run a test several times and pull data out of a data source (file, database etc) to use in your tests. Sadly not all of us have SoapUI Pro but it's entirely possible to do something similar with SoapUI free (as Theo Marinos demonstrates in his excellent blog on the subject).

The above blog chooses a random value from a file each time you run a test. This is great for load testing, or if all you want to prove is that a service works correctly with one of a given set of values, but what if you want to test that a given input (or inputs) to the service returns a given output (or outputs)? Well, that's also quite easy to do too. I've put together a simple example which reads every value in a csv file, calls a web service using the input value, and ensures the response contains the expected response from the same line of the file.


Creating a service to test

As I didn't have a real service to test, the first thing I did was to create a SoapUI mock service based on a very simple WSDL. In the mock service I put some groovy to generate a response based on the input message:
def holder = new com.eviware.soapui.support.XmlHolder( mockRequest.requestContent )
def name = holder["firstName"];
if (name.substring(0,2)=="Mr")
    requestContext.greeting="Hello "+name;
else
    requestContext.greeting="Howdy "+name;
This generates a "greeting" variable. If the "firstName" element starts with Mr then it responds "Hello <name>", otherwise it replies "Howdy <name>" - simple. Then in the mock service I put this greeting variable to use:
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:examples:helloservice">
<soapenv:Header/>
    <soapenv:Body>
       <urn:sayHelloResponse soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
          <greeting xsi:type="xsd:string">${greeting}</greeting>
       </urn:sayHelloResponse>
   </soapenv:Body>
</soapenv:Envelope>

Creating some test data

OK, so now we have something to test, let's generate some test data. A simple CSV:
req_firstName, resp_greeting
Bob, Howdy Bob
Fred, Howdy Fred
Mr Jones, Hello Mr Jones
Jill, Howdy Jill

The first line gives the names of the properties the values will be stored in; the other lines are the values of those properties. As will become apparent in a second, the groovy is completely generic and doesn't need changing, as it can read any number of values into variables based on the first line.

So now we have a service to test and some data; we need to set up a test...

Configuring the test

I set up a project with a very simple Test Case. It only has 4 test steps:
  1. A groovy script to load the given values from the file
  2. A soap test step to call the mock web service
  3. A conditional goto step to return to the start
  4. A step called "End" (the name here is important as we'll see below but the step can be of any type - in my case I used a delay test step)
The other thing which is needed is some properties in the test case:

  • TEST_FILE: the name (with path) of the csv file created above
  • One property for each column in the CSV (with the same name as the header row, i.e. req_firstName and resp_greeting)
The only property which needs to be named as above is the TEST_FILE as all the others are user defined.

1. Groovy

tc = testRunner.testCase;

// First run through: initiation
if (context.fileReader == null){
    log.info ("##Starting Data run... File="+tc.getPropertyValue("TEST_FILE").toString());
    context.fileReader = new BufferedReader(new FileReader(tc.getPropertyValue("TEST_FILE")));
    line = (String)context.fileReader.readLine();

    // Get variable names
    context.variableNames = line.split(",");
}

// Process each line
context.valueLine = context.fileReader.readLine();

// Data to process: load values
if (context.valueLine!=null){
    // Assign the parts of the line to the properties
    for (int i=0;i<context.variableNames.length;i++){
        variable = context.variableNames[i].trim();
        value= context.valueLine.split(",")[i].trim();
        log.info "Assigning: $variable=$value";
        tc.setPropertyValue(variable, value);
    }
}
// No data to process: tidy up
else{
    context.fileReader.close();
    context.fileReader=null;
    context.variableNames=null;
    testRunner.gotoStepByName("End");
}
The above can be pasted into a groovy step unchanged for any number of properties read from any csv data file.

2. SOAP Test

My soap test step request contains a reference to the request property:
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:examples:helloservice">
   <soapenv:Header/>
   <soapenv:Body>
      <urn:sayHello soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
         <firstName xsi:type="xsd:string">${#TestCase#req_firstName}</firstName>
      </urn:sayHello>
   </soapenv:Body>
</soapenv:Envelope>
The assertion in the test step (of type XPath) checks the response has the right value in it (again based on the csv value).
Declare:
declare namespace urn='urn:examples:helloservice';
declare namespace soapenv='http://schemas.xmlsoap.org/soap/envelope/';
/soapenv:Envelope/soapenv:Body/urn:sayHelloResponse/greeting
Expected Result:
${#TestCase#resp_greeting}

3. Conditional goto 

The goto step should always go back to the groovy step. When all the rows in the file are completed, the groovy code above will jump to "End" and thus stop the loop - this is why the end test step has to be called "End". In order to make the conditional goto unconditional we just need to make the condition always true - by use of the XPath true() function.
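In SoapUI terms the goto step's configuration is just a single condition, something like this (the step name is the one used in this test case):
Condition (XPath): true()
Target step: 1. Groovy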

4. End

Finally, the End step is just needed so the groovy script can jump out of the loop. It doesn't matter what this step does so long as there is a step called "End" after the goto step.

Running the test

So, as the above shows, there were 14 test step executions (the first 3 steps executed four times for the four people in the csv, then the first and last steps executed to end the cycle). The request and response properties contain the values of the last line in the file as I didn't clear them. From here I could add more variables to the file or add a different set of tests - each test case is self contained, so each one can have a different TEST_FILE property and different variables. Simple.

If the service didn't give the expected response to one of the values then the soap test step would fail because of the assertion.

Licensing

All the above is open source under the GPLv3 license without any guarantees or promises; that said, it's not very complicated and seems to work well enough - hope it's useful.

Monday, 1 July 2013

Starting SOA - Hints and Tips

Introduction

Building SOA on solid foundations is really important. I've come across so many SOA implementations which have kicked off without really considering, or perhaps just misunderstanding, some key concepts of SOA, and so have made mistakes which are difficult to fix in retrospect (especially when a project is over and the funding is gone). Initially I was going to write a set of principles for SOA, but that was a bit of a grand title for an unstructured cook's tour of "stuff I've seen go wrong and how to avoid it". These cover ground from both the big picture (governance, standards etc) and the low level (how best to construct XSD interfaces) sides of the messaging picture. So in no particular order...

1. Make interfaces strict

One of the things I come across surprisingly often is the idea that to be "reusable" an interface should be as lax (as unrestrictive) as possible. Before we start considering this, it's probably worth defining what I mean by lax and strict interface definitions. So, ignoring whether the data type below is a good example or not, here’s an example "person" data type. The first would be a strict interface:
  • Title: Required, Enumeration [Mr, Mrs, Miss, Ms, Dr]
  • Surname: Required, String (conforming to regular expression ^[a-zA-Z\-\']{2,25}$ - i.e. 2-25 chars in length, containing only letters and hyphens/apostrophes)
  • Firstname: Optional, String (conforming to regular expression ^[a-zA-Z]{1,25}$ - i.e. 1-25 chars in length, containing only letters)

However a lax version would be:
  • Title: Optional string
  • Surname: Optional string
  • Firstname: Optional string

The differences should be fairly obvious: the lax interface will allow almost any values; the strict interface is very prescriptive about exactly what is acceptable.
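In XSD terms the strict version might look something like this (a sketch of the element only – note that XSD patterns are implicitly anchored, so the ^ and $ are dropped):
<xs:element name="Person">
   <xs:complexType>
      <xs:sequence>
         <xs:element name="Title">
            <xs:simpleType>
               <xs:restriction base="xs:string">
                  <xs:enumeration value="Mr"/>
                  <xs:enumeration value="Mrs"/>
                  <xs:enumeration value="Miss"/>
                  <xs:enumeration value="Ms"/>
                  <xs:enumeration value="Dr"/>
               </xs:restriction>
            </xs:simpleType>
         </xs:element>
         <xs:element name="Surname">
            <xs:simpleType>
               <xs:restriction base="xs:string">
                  <xs:pattern value="[a-zA-Z\-']{2,25}"/>
               </xs:restriction>
            </xs:simpleType>
         </xs:element>
         <xs:element name="Firstname" minOccurs="0">
            <xs:simpleType>
               <xs:restriction base="xs:string">
                  <xs:pattern value="[a-zA-Z]{1,25}"/>
               </xs:restriction>
            </xs:simpleType>
         </xs:element>
      </xs:sequence>
   </xs:complexType>
</xs:element>
The lax version would simply declare all three as optional xs:string elements with no restrictions.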

Although I disagree with the argument for lax, I'll try to give it a fair hearing. The argument seems to go something like this: "if we make the interfaces restrictive then we can't reuse the interface (e.g. with another back-end system, or change the back end) without having to alter it". Put another way, if we add a new system (either with content based routing or to replace the existing one), and this new system allows surnames of up to 50 chars, or the title can now include "Admiral", then the lax interface can be reused whereas the strict interface can’t. That is indeed true up to a point, but there are some massive disadvantages to a lax interface:
  1. Service consumers aren't aware of what is allowed - or if they are it's in a document, or comments field which has more chance of being ignored or misunderstood
  2. Neither the service consumer, nor the service provider can do real-time validation of messages to the level required to stop errors. With the above lax interface I can validate that the message passes some values called Title, Surname and Firstname, and that it doesn't pass something called "Nickname", but that's about it. Passing in a surname 30 characters long, and a title of "Master of the Universe", would be perfectly valid according to the interface. Because it’s valid the message would be passed to the back end system. Given that it is actually invalid, it's likely this would cause an error - if the system doesn't validate the input either, then an error would be created when you tried to write an overly long string to the database. This isn't great for several reasons: first, the errors are all the way down in an application log; secondly, it might not be easy to identify the cause as an invalid message (it might look like an internal application DB error). The back-end may or may not pass an error back to the consumer and the error might or might not mean anything (e.g. a generic DB error). None of this is great, but there is also a worse scenario: if the back end is a creaking old mainframe or C application which has fixed length variables. The title variable is 4 characters long (to save memory, because Miss is the longest allowed title), and writing "Master of the Universe" overflows this variable straight into the memory allocated for something else, with unpredictable consequences. That's a little less likely, but as a rule propagating bad data around the system is never a good idea when it could be caught at the front door.
  3. One final option: assuming it's agreed that stopping errors before they hit the backend is a good idea, there's no reason you can't do some validation in the ESB and still have a lax interface. True. There's no need to encode this in a WSDL/XSD. This would get over the problems of the errors not occurring in the right place or breaking the back end, but there are problems here too. If you're doing real-time validation you have to write some rules rather than just click the "validate" checkbox in that nice expensive ESB tool you bought. Secondly (and going back to point 1) the consumer can't validate their messages whilst unit testing unless they write some complicated test harness (rather than just generating a mock service from your interface, say in soapui.org) - and who is to say this test harness matches your validation rules? This makes unit testing harder and more expensive, and means you'll be less likely to find errors before link/integration testing - and as the old adage goes: the earlier you find an error, the cheaper it is to fix. Finally you're losing the supposed benefit of having a lax interface - the ability to reuse without change - as you’d need to change the ESB rules and your test harness anyway!

There is one last thing to consider which speaks against the myth of lax interface benefits. Adding a new backend will almost always have some impact on a service consumer and/or interface. Change isn’t bad, unmanaged change is. This all comes back to the need for governance (see below). At least if you've got strict interfaces, you understand what the current state of play is. If the new system takes 24 character names then you know there's a problem just by looking at the existing interface; if it takes 30 character names you can either continue as you are (you're just not using the full length - useful if running two back-ends, as you're taking the lesser of the two) or you can re-version the interface and inform consumers they can make their surnames longer if they wish by using the new version. If you're just taking open strings it's not at all obvious whether this is an issue, because you've no idea what rules everyone else is playing by. After all, an interface definition is merely a contract saying what is required; if a back end system requires a surname to be there, but you make it optional, then all you're doing is drafting a contract you can’t meet. If a consumer sends a message with no surname (fulfilling their part of the deal), then rather than sending back a positive response you send an error - which isn't very fair.

2. Decouple systems, don’t just separate them

Decoupling systems is more than separating them by putting an ESB between them. To get the benefits of decoupling (reuse, hiding the complexity of multiple back ends behind a single service, isolating the impact of change etc) it takes much more than just passing the message through an ESB. An ESB is not a reverse proxy. I have on more than one occasion seen an architect tell developers to “just use the XSD from the backend” as the XSD for the front end (sometimes with a different namespace – sometimes not even that)! Why you'd do this I'll never know (unless it’s to tick a box saying “I used the ESB like I was told to”). In order to actually decouple systems, you need to encapsulate the implementation in a suitable service. It's always going to be difficult to have truly system independent data types, but by trying to generalise the interface and considering carefully what functions are required, you might get some actual benefit from using an ESB.

3. Development standards and libraries

As with any component based development it's important to have a view on how the software will be constructed. This will come from the Technical Architect or senior developers. Whilst this will be expanded during the first few projects, even the first services need some guidance about:
  • What are the reusable components in the middleware (e.g. logging service, security module)
  • What is the structure of development projects, source repository etc
  • What coding standards will be followed. Depending on the tool this might be based on industry standards (e.g. for JBoss ESB the Java standards will be followed). Even if there are some standards to follow, these will need to be expanded for this SOA implementation to answer questions like: is the preferred method of mapping XSL, XPath, or ESQL? What is the naming convention for elements, types and enumerations in XSDs/WSDLs? What is the naming convention for queues, namespaces, endpoint URLs? These don't take a long time to map out, but are worth doing up front, as having each developer come up with their own way of doing things just makes maintenance difficult.

4. Start your governance early

There are lots of cool governance tools out there, from Registry/Repository tools to real-time management and monitoring suites. These allow you to do real-time service discovery, manage subscribers, throttle services, give differing QoS to different consumers, and even scale your infrastructure from the cloud based on demand. It's all very cool stuff. Then again, if you're just getting started then your first few services don't need any of this - a few simple and relatively easy to implement governance steps can pay dividends in allowing you to grow and alter your SOA without running into trouble, and will allow you to switch to the shiny toys at a later date.
  1. Governance structure: Set up a governance structure and process; in my experience this is usually two tier:
    • Technical board: to sign off designs, approve deviation from standards, discuss and decide on new standards - meets frequently (e.g. every two weeks)
    • Steering group: to set strategic aims, approve the standards in the first place, manage the pipeline etc - meets less frequently (e.g. quarterly)
  2. Version services: after all, they're going to change eventually (especially if you've got prescriptive interfaces). Change isn't a problem, but it does need managing. Most SOA toolkits can happily run multiple versions of the same service alongside each other, so if the service is properly versioned you can upgrade clients one at a time rather than having a big bang release (or, more dangerously, altering a service without a consumer knowing it's going to happen until it's too late).
  3. Document your services: in addition to the XSD/WSDL, each service should have a contract (document, wiki page etc) to define additional data around the service. This can hold non-functional details (e.g. max response time, maximum expected load, maximum message size) and invocation details which aren't in the XSD (is this at-least-once, at-most-once or exactly-once - so is the consumer expected to retry until it gets a reply or not?). Another thing of use is a sample message, so anyone wanting to use the service can see an example and use it in their unit testing - this isn't a substitute for a strict service definition but can be useful.
  4. Create a service catalogue: there are a lot of reasons for a service catalogue and quite a few things to record. Eventually you can consider adopting a Registry/Repository tool if your SOA grows to a sufficient size, but to start with a spreadsheet is usually sufficient. The service catalogue allows people to find existing services and allows you to manage them. You might want to have two levels of catalogue: business services (exposed to the world for use across the enterprise) and technical services (re-used within the ESB, either for common functions like auditing or as building blocks to make composite business services). At a minimum every good service catalogue should include:
    • Service name: The logical name of your service
    • Description: A short description of the service, to allow future consumers to identify if this is what they're looking for, and so you can see what you've got before you accidentally build duplicates
    • Version: The version of this service, this means you can track more than one version of the same service - the name and description might not change but the items below will be different between service versions
    • Consumers: Who uses this version of the service? This is important as eventually you'll want to turn services off (an often used rule of thumb is to keep consumers at most one version behind the latest, and decommission older services). Knowing who uses a service ensures you avoid suddenly breaking an important application which is on an obsolete service version
    • Status: What phase of the service lifecycle is this service in? There are various granularities of service lifecycle, but at a high level it generally looks like this: 
      • Identified (as a viable/useful service to build)
      • In Development (including analysis, build, test etc)
      • Live (in the production environment)
      • Deprecated (an old version of a service scheduled for closure - new users shouldn't start using this version but existing users may still be doing so whilst planning to upgrade)
      • Closed (retired, deleted, gone)

5. SOA principles and standards

In addition to the development standards (point 3), there are also higher level architecture standards. These have probably been at least partially considered when deciding to go down an SOA route, and doing product selection (i.e. if you've got an ESB and a BPM product there is presumably a vision for how these will be used), however these can be expanded to give more technology guidance at a level above that of development coding standards but below that of the enterprise vision. These would include things like:
  1. When to use the SOA: in many large organisations there is a combination of integration tools for managed file transfer, EAI, and ETL in addition to the ESB. On occasion I've seen a "the bus is the answer" mentality creep in. With a step back these are obviously different challenges with widely different use cases, but just because the ESB is the tool of the moment doesn't mean it should suddenly be processing overnight runs of terabytes of data. Nor should all interfaces suddenly be rewritten to use the bus if there is no hope of reuse.
  2. Messaging approaches (SOAP vs Rest, JMS/MSMQ/WMQ, when to use queues vs http, when batch is more appropriate than real-time, should services be WS-I compliant)
  3. Security standards: do services need to be secured, encrypted, can messages be logged in plain text, how are third parties connected to the ESB (if they ever can be), how can the internet connect to services?
  4. Components of the SOA: ESB, UDDI server, registry/repository, BPM, etc. What should be the domain of each product (e.g. what should be composed in BPEL, and what in BPM).

Of course, as with all principles, these things can be contravened if there's a good reason to (through the governance process above). An example would be a principle that "get" services go over HTTP, but "update" messages go over JMS (especially if reliability is required). If a system cannot send JMS (say for firewall reasons) but reliability is still required, then for this service WS-ReliableMessaging could be implemented; if reliability wasn't needed then the service could use HTTP and either be retried or not when failures occur. In general JMS might make sense for an organisation using Oracle Service Bus, because it's based on Java and JMS is generally easier to interact with than WS-ReliableMessaging.

6. SOA isn't SOAP, SOAP isn't HTTP

Having some basic guidance for what to use is a good idea as discussed above, but there is one myth I have hit on a few occasions so I’ll put it straight here: SOA doesn't mean SOAP, and SOAP doesn't mean you always use HTTP (it can be sent over JMS or SMTP for example). It may be that when defining your principles you select SOAP, which is fine (I'm not against SOAP - see note below), but there may well be times when XML over JMS, or a JSON REST service, would be appropriate. If it looks like I’m picking a specific example of the wider point from #5, that’s because I am. This is just a case of exceptions to a principle, but I keep seeing people try to make SOAP do everything, or use HTTP for everything, when XML over JMS or a binary message would be better. The principle shouldn’t just say SOAP over HTTP (unless you never need reliability, you have a reason not to use a queueing technology, or you really like the WS-* extensions) – most SOA architectures need both HTTP and a queue (be it MSMQ, IBM WMQ or JMS), and furthermore SOAP isn’t the answer to everything.

Note on SOAP: As a slight aside, a lot of people don’t like SOAP but I do. Not because of the elegance of the specification (I personally think WSDL is as ugly a standard as you might expect from a committee), but because there are so many tools available which can automatically present/consume SOAP messages (e.g. wsdl2java), or test them (e.g. soapui). Plus who needs the hassle of writing their own custom standard? I can see the arguments behind REST, and if you want smaller messages JSON is a very fine approach. The argument I don't quite understand is that "SOAP is inefficient". XML is verbose and as such a bit bloated and inefficient (that’s true), but I don't see that SOAP is any more inefficient than XML. XML vs JSON I get, but SOAP vs XML? Nope. For the price of a couple of extra envelope nodes, you get all the interoperability of the SOAP stack. You don't have to do dynamic discovery or use the WS header standards if you don't want to.

That’s all folks

Well, that rather lengthy selection is my set of tips for common pitfalls of “doing SOA”. Hope it helps.

Friday, 31 May 2013

XSLTUnit with XSLT2.0 (and without exslt)

Recently whilst working on a little XSL project of mine, I came across a rather nifty unit testing suite for XSLT called XSLTUnit. It's a simple way of expressing test conditions for your XSL stylesheet by writing a test style sheet (which xsl:imports the stylesheet it's testing) and expressing a set of assertions. Now this is all well and good (in fact really quite neat) - but when I came to try to use it I came a cropper when using my favourite XSL sandpit Kernow, and also when using the xsl extensions to NetBeans. The error I got from Kernow (just so it's indexed on google) was:
net.sf.saxon.trans.XPathException: Cannot find a matching 1-argument function named {http://exslt.org/common}node-set(). There is no Saxon extension function with the local name node-set
Cannot find a matching 1-argument function named {http://exslt.org/common}node-set(). There is no Saxon extension function with the local name node-set

This is essentially because XSLTUnit uses the exslt extensions, whilst Kernow uses Saxon HE (which doesn't support exslt). Quite frankly I'm loath to pay for Saxon PE/EE just for this, especially as it restricts anyone else wanting to run the unit tests (which is a recipe for unit testing being ignored). Fortunately, in the years since Eric van der Vlist wrote XSLTUnit in 2002, the wonders of XSLT 2.0 have burst into the world, and happily XSLT 2.0 treats results as nodes without the need for the node-set extension.

This meant it wasn't a hard job to amend the XSLTUnit library and sample test sheet to XSLT 2.0 and get it working in Kernow (and anything else which is XSLT 2.0 aware – given that XSLT 2.0 came out in 2007 it isn't like it's bleeding edge). This tweaking basically involved taking the exsl attributes out of the stylesheet, changing the stylesheet version from 1.0 to 2.0, and changing "exsl:node-set($nodes1)" to "$nodes1" - it really was that easy.
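In outline the change looks something like this (a sketch of the relevant fragments, not the full xsltunit source; the mode name is illustrative):
<!-- XSLT 1.0: a temporary tree has to be converted with the exslt node-set() extension -->
<xsl:variable name="nodes1">
   <xsl:apply-templates select="/" mode="test"/>
</xsl:variable>
<xsl:apply-templates select="exsl:node-set($nodes1)"/>

<!-- XSLT 2.0 (version="2.0" on the stylesheet, exsl namespace removed): temporary trees are nodes natively -->
<xsl:apply-templates select="$nodes1"/>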

Whilst I was at it, I made one other small change to the test xsl sheet (tst_library.xsl is the example from xsltunit.org). This was to apply the sheets to the standard xslt input stream rather than "document('library.xml')/<xpath>". I couldn't work out why you'd want to include the name of the input xml in the stylesheet itself, as it just cuts down the flexibility to run your unit test with lots of input files. After that it all worked a treat.

I did send this back to Eric, but I don't know if he's actively maintaining xsltunit or if he wants to make it XSLT 2.0. Therefore, if it's of any use to anyone, I've put this all into a zip for anyone to download - everything except tst_library2.xsl and xslunit2.xsl are the originals from xsltunit.org. The two files with 2 at the end are merely alterations of the same files without the 2 suffix (as described above).

Licensing

As stated on the xsltunit site:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
    The name of the authors when specified in the source files shall be kept unmodified.

THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ERIC VAN DER VLIST BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

My alterations are made in the same spirit; copyright of all the original bits remains with Eric van der Vlist 2001, 2003.

Friday, 17 May 2013

Running SOAPui Tests from Jenkins/Hudson/Ant


The challenge

A while ago I came across a problem which I suspect has been tackled by a lot of people over the years: running one or many SOAPUI unit test packs from Jenkins. So far all the solutions I've seen on the web basically involve putting an exec call into your ant script and pointing it at the testrunner cmd/sh script. This is fine and does work, but below is a slightly more complete solution which I hope is easy to just drop in and configure - plus I've included some report generation stuff, which is useful from a QA audit perspective if anyone wants to check what testing you've been doing.


Essentially this blog covers a simple ant build script which includes:
  • A generic way of calling soapui from ant (using a re-usable ant macrodef)
  • HTML report generation - generating a junit style html report on the result of your tests
  • A command line so you can invoke the build script and type in the test to run (although the only reason to use this rather than the soapUI GUI is if you want the html report for a single test suite).
  • An “all” target (far more useful) which can be configured to run all the tests in the test suites, and can be invoked from Jenkins (which is sort of the point).

Getting Started

Before we get going I assume you have (or need to get) the following:
  1. SoapUI: I originally wrote the script using soapUI 3.6.1 so I know it's good to go with that version and presumably any version since.
  2. Apache Ant: If you wish to generate test reports you need version 1.5 or higher.
  3. AntContrib: v0.3 or higher to be added to your ant lib folder.
To start with, there's a simple zip of files to work with. It includes:
  1. build.xml - A sample ant build file – this is the crux of everything we'll look at.
  2. VerySimpleProject – a soap UI project with 2 test suites
  3. VerySimpleProject2 – another soap UI project with a single test suite
  4. junit-noframes.xsl – a modified version of the xsl which comes with ant, altered to work better with soapUI output

Making build.xml your own

Configuring Properties

The xml file has a number of properties which need to be configured specific to your environment and/or project. They are:
  • jar.soapui: The name of the soapui JAR file, which is sadly version specific
  • dir.soapui: Self-explanatory really, this is the directory soapui is installed in on the machine running ant
  • dir.temp: a directory to store temporary files created during the execution. This is created upon build and deleted once the run has completed, so make sure it's specific to this purpose and not c:\. Generally ./temp is good enough so shouldn't need to be changed
  • dir.reports: a directory where the report outputs are stored (this needs to exist before the script is run)
  • dir.xsl: the directory the custom xsl for generating reports is stored in, this is by default ./xsl which shouldn't need to be altered
  • java.executable: the full path to java on this machine
  • test.report.title: The title placed at the top of your output report (passed as a parameter to the report generating xsl)
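
For orientation, the property declarations at the top of build.xml look something like this - the property names are the real ones above, but every value is an example to replace with your own:

<property name="jar.soapui"        value="soapui-3.6.1.jar"/>
<property name="dir.soapui"        value="C:/Program Files/eviware/soapUI-3.6.1"/>
<property name="dir.temp"          value="./temp"/>
<property name="dir.reports"       value="./reports"/>
<property name="dir.xsl"           value="./xsl"/>
<property name="java.executable"   value="C:/Program Files/Java/jre6/bin/java.exe"/>
<property name="test.report.title" value="Nightly Regression Tests"/>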

Configuring build all

Once these are set, you're nearly ready to go. All you need to do is say which tests to run. The best way of doing this is to put all the tests in the "all" target, and ant will then invoke them all each time. Due to the naming convention I was using at the time, the script assumes a default test suite called "Full Test", so if you don't specify a suite name it will default to that. If you want to call other test suites you just need to name them. This can be seen in the all target example below:
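
Something along these lines - the runTest attribute name projectName and the second suite's name are illustrative assumptions, but the shape matches the steps listed next:

<target name="all">
    <antcall target="clean"/>
    <!-- no suiteName passed, so the macro defaults to "Full Test" -->
    <runTest projectName="VerySimpleProject"/>
    <!-- a different suite from the same sample project -->
    <runTest projectName="VerySimpleProject" suiteName="Another Suite"/>
    <!-- a suite from the second soapui project -->
    <runTest projectName="VerySimpleProject2" suiteName="Full Test"/>
    <antcall target="report"/>
    <antcall target="tidyup"/>
</target>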

So this calls:
  1. clean - a general tidy up target and creation of the temporary directory (dir.temp)
  2. runTest macro - as no suiteName property is passed, it defaults to "Full Test".
  3. calls runTest with a different suite from the same sample project
  4. calls runTest with a suite from a second soapui project
  5. calls the report target. The report target:
    • generates a single html report covering all the tests which have been run (since the last clean - step 1 in this case) 
    • zips up the test runner output log/error files - in case a test fails and you want to look at these detailed logs
    • Sets the names of both the zip and the html to include the date and time the ant script started running - this is just so subsequent runs don't overwrite it
  6. calls tidyup to delete the log directory and a couple of the soapui files left over from the job.

Default target

The default target of the build.xml is "test", which will prompt for keyboard input of the name of the project file (without the .xml on the end) and the test suite. This can occasionally be useful if you want to test and report on a single suite; however, you might find it more useful to change the build file's default target from test to all.
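
Doing that is a one-attribute change on the project element at the top of build.xml (the project name here is illustrative):

<project name="SoapUITests" default="all" basedir=".">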

Really that's all you need to know to get this up and running... but just for completeness I'll do a quick tour of the runTest macro and the reports target if you're interested in understanding the nuts and bolts. If not then you're ready to go.

What's under the covers?

Run Test

The run test macro essentially invokes soapui, but rather than calling the cmd/sh script to do it, it calls java directly (hence the need to know where java was in the properties steps above). Most of this is quite obvious from looking at the code, and you can play with the output and the max memory should you need to.

The macro does pass a number of arguments to soapUI, which are explained on their website. Of these we only use:
  • -a to ensure errors are reported verbosely in the output
  • -j to tell soapui to generate the xml which gets turned into the html report
  • -s to tell it the name of the test suite to run
  • -f to say where to put the outputs
Optionally, if you have any properties which are used in your tests (in our case we had a jdbc connection string as a property) you can get ant to pass these in; that way you can externalise them for running on different environments without the need to change your test scripts. They can be set for the developer's environment within the soapUI project, and ant will merely overwrite them on the continuous integration environment - cool huh?
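
For the curious, here's a stripped-down sketch of the macro. The main class and classpath layout are what a typical soapUI 3.x install needs rather than a copy of the zip's exact build.xml, and the memory setting is just an example:

<macrodef name="runTest">
    <attribute name="projectName"/>
    <attribute name="suiteName" default="Full Test"/>
    <sequential>
        <java classname="com.eviware.soapui.tools.SoapUITestCaseRunner"
              jvm="${java.executable}" fork="true" maxmemory="512m"
              failonerror="false">
            <classpath>
                <pathelement location="${dir.soapui}/bin/${jar.soapui}"/>
                <fileset dir="${dir.soapui}/lib" includes="*.jar"/>
            </classpath>
            <arg value="-a"/>              <!-- verbose error output -->
            <arg value="-j"/>              <!-- junit-style xml for the report -->
            <arg value="-s@{suiteName}"/>  <!-- the suite to run -->
            <arg value="-f${dir.temp}"/>   <!-- where the outputs go -->
            <arg value="@{projectName}.xml"/>
        </java>
    </sequential>
</macrodef>

An externalised test property would then just be one more arg line, e.g. <arg value="-Pjdbc.connection=${jdbc.connection}"/> - assuming your soapui version supports the -P project-property switch (the property name here is made up).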

Finally there's some tidy up in renaming the outputs to ensure that two project files with the same test suite name (e.g. Full Test) won't overwrite each other.

Reports 

The reports target is very simple, which is largely because it uses the standard junitreport functionality, and because most of the work I did here was hacking the xslt to be more soapUI friendly (describing that is too big a job for today). 

Essentially it grabs the xml files for all the test suites run since last clean (each renamed to be unique by the runTest macro above) and runs the junit-noframes.xsl on them. Unlike the standard ant one, the xsl now has three parameters:
  • runDateTime (because unlike java SoapUI doesn't include this in the output xml)
  • resultsFile - which means the html can have some text to say which zip contains the full logs
  • title - the title of the html report
Finally, all the logs are zipped into a file in the reports folder, and the html is moved from the temp folder to the reports folder, as it would be a shame for the tidyup job to delete it after we've only just created it.
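
Putting that together, the target looks roughly like this (a sketch rather than the zip's exact contents - ${timestamp} is assumed to have been set by <tstamp> when the script started, and the nested <param> elements need a reasonably recent ant):

<target name="report">
    <junitreport todir="${dir.temp}">
        <!-- the junit-style xml files soapui wrote (renamed unique by runTest) -->
        <fileset dir="${dir.temp}" includes="TEST-*.xml"/>
        <report format="noframes" styledir="${dir.xsl}" todir="${dir.temp}">
            <param name="runDateTime" expression="${timestamp}"/>
            <param name="resultsFile" expression="logs-${timestamp}.zip"/>
            <param name="title" expression="${test.report.title}"/>
        </report>
    </junitreport>
    <!-- keep the detailed testrunner logs in case a test fails -->
    <zip destfile="${dir.reports}/logs-${timestamp}.zip" basedir="${dir.temp}"/>
    <!-- move the html out before tidyup deletes the temp folder -->
    <move file="${dir.temp}/junit-noframes.html"
          tofile="${dir.reports}/report-${timestamp}.html"/>
</target>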

Licensing

...and finally: 

The junit-noframes.xsl is under the Apache 2.0 license - and my updates to it fall under the same.
The custom build.xml is open source under the GPLv3 license - hope it's useful.