Monday, September 22, 2014

The roaming dinosaur series, Episode 1: Using Self-Signed Certificates to Secure Node.js HTTP Traffic.

Node.js for the enterprise!

Introduction

Once, annoyed by a novice question about how my phone worked, I shared that thought with a friend at work, who smiled kindly and reminded me that "we are all a bunch of dinosaurs roaming the mobile land". I tend to disagree, as most of the concepts we deal with today are actually not new at all, but I digress.

So, as someone who considers himself a master of the old world of legacy web apps, I decided to explore how the new kid on the block (Node.js) responds to my Mesozoic Era standards (that is the era when dinosaurs lived, in case you are wondering).

The first post of this series walks through a quick configuration to run Node.js over HTTPS using a self-signed certificate, but first let us understand why we would want to do that (there are good reasons, trust me).

DMZ, SSL and those nasty security checks

A few years back I worked with a brilliant (also neurotic) application server administrator who was adamant about turning off all HTTP traffic to his application server farm.
"Only HTTPS is allowed." "HTTP is for the welcome page; go get that 'bleeb' of a web server, not my app server."

He was right. This is the typical layout of an enterprise application server:



Using firewalls to protect enterprise application server.

A potential hacker has no direct access to any of the application servers; he has to go through a perimeter we call the DMZ. If by any chance he gains access to a server in that zone, he must not find any 'powerful tools' under his command, which leaves him stranded until the security team discovers and deals with the breach.

This is why the DMZ will contain computers with very limited abilities: no JVMs, and definitely no Node.js. A proxy gateway like Nginx, or even a simple HTTP server like Apache, is safe to put there. Those servers are usually 'hardened', i.e. only the minimal required software is installed and only the minimal required ports are open, so no telnet, and in some extreme cases no ssh even; you have to physically be in the room to upload files to the machine.

But there is something wrong with this picture: the fact that the traffic between Nginx and the Node.js servers is plain HTTP is itself a security hole.

Take a close look at the problem


It is the 'admin' problem: any administrator of those machines can install a network sniffer (Wireshark, anyone?) and, voila, he has access to all unencrypted data going in and out of the servers; that is user names, birth dates, addresses, social insurance numbers, account numbers, anything the user submits that is not encrypted.

I have seen many customers forgo HTTPS internally, citing that the data is in the "trusted zone". I quickly ask the question my security mentor once asked me: "Is your company policy such that your server administrators can view the birth dates and social insurance numbers of all your employees?" While they figure out the answer to that question, we start configuring HTTPS.

Unlike external HTTPS, internal HTTPS does not require publicly signed and trusted certificates (those are costly and not easy to obtain). Internally it is sufficient to use self-signed certificates, the ones administrators can create easily and renew at will.

Now you understand the method behind my madness: Node.js should always run over HTTPS, and if possible exclusively over HTTPS. Make it a good practice to configure Node.js with HTTPS every time; after all, it will only take you five minutes, as you will see below.

Configuring Node.js for HTTPS

HTTPS is HTTP over SSL. Explaining SSL is beyond the scope of this post, and to be honest only one man ever explained it to me well, ten years ago (that was the mentor of my security mentor); I am always amazed at how little understood such a widely used concept is.

So in order to configure HTTPS you will need a pair; of keys, that is. Actually it is a private key and a certificate. The certificate is what your server presents to browsers; if the client (or user) chooses to trust your certificate (i.e. trust your server), it takes the public key from that certificate and uses it to encrypt messages that only your server can decrypt with the private key.

In this sample I used the name 'klf' as my organization name when configuring my keys; you can use whatever your project name is. I am also using openssl, an open tool for generating keys, which I find much easier to use for Node.js; when it comes to Java servers, keytool is the more suitable tool.

 1 - Create the Key pair 

First, create your private key this way (specifying an explicit key size is a good idea, as older openssl defaults are too small):

openssl genrsa -out klf-key.pem 2048

2 - Create a signature request

openssl req -new -key klf-key.pem -out klf-csr.pem

This command will ask you a few questions to identify your server (you can see my sample replies in the following screenshot). Note I used the password 'passw0rd'; I am hoping you have more sense than to do that :).



3 - Self sign that request 

openssl x509 -req -in klf-csr.pem -signkey klf-key.pem -out klf-cert.pem


4 - Export the PFX file 

Now that you have the self-signed certificate, you need to export a PFX file that your Node.js code can use to start the HTTPS server:

openssl pkcs12 -export -in klf-cert.pem -inkey klf-key.pem  -out klf_pfx.pfx


You will be asked to specify your PFX password (I used 'passw0rd' again to keep writing this post simple; please use something else!). It will be needed by your Node.js code, as you will see in the next step.
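If you script this often, the four steps above can also run non-interactively. A sketch assuming openssl is on your PATH; the -subj and -passout/-passin flags skip the interactive prompts, and the subject values are my sample answers, not anything special:

```shell
# Steps 1-4 in one shot, no prompts (sample subject and password)
openssl genrsa -out klf-key.pem 2048
openssl req -new -key klf-key.pem -out klf-csr.pem \
  -subj "/C=CA/O=klf/CN=localhost"
openssl x509 -req -in klf-csr.pem -signkey klf-key.pem \
  -days 365 -out klf-cert.pem
openssl pkcs12 -export -in klf-cert.pem -inkey klf-key.pem \
  -out klf_pfx.pfx -passout pass:passw0rd
# Sanity check: dump the certificate subject from the new PFX
openssl pkcs12 -in klf_pfx.pfx -passin pass:passw0rd -nokeys -clcerts | \
  openssl x509 -noout -subject
```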


5 - Start your Node.js server

Here are the code snippets needed to start your Node.js server (I am using Express here, and this is not all the code, just what you need to modify in your express(1) generated app.js). Note the pfx path must point to the file you exported in step 4:

app.set('port', process.env.PORT || 3000);
app.set('ssl_port', process.env.SPORT || 3443);

var https = require('https');
var fs = require('fs');

var options = {
    pfx : fs.readFileSync('ssl/klf_pfx.pfx'),
    passphrase : 'passw0rd',
    requestCert : false
};

https.createServer(options, app).listen(app.get('ssl_port'), function(){
    console.log('Express server listening on SSL port ' + app.get('ssl_port'));
});

6 - Start the server and test from a browser 

Next, access your server at https://localhost:3443.
Your browser will likely warn you about the certificate (because it does not see any signing authority on it), so tell the browser to trust it.

You can click 'Show Certificate' and you will see that Node.js is presenting you with the certificate you self-signed.


Conclusion

Using Nginx or any other HTTP router to terminate the external SSL request (which uses a publicly signed certificate) and initiate a second SSL request from the DMZ to Node.js in the corporate trusted zone is a good practice for on-premise enterprise Node.js applications.


Sunday, April 20, 2014

Enterprise IT migrations and/or transformation challenges

 Introduction

In my first blog entries I discussed the brave new ways of building mobile applications, specifically the use of cloud-hosted technologies.

As IT departments scramble to shift to mobile/cloud/analytics technologies and DevOps/agile methodologies, it is very important to keep an eye on how the existing ecosystem is approached with these changes. It is a great opportunity to revitalize IT teams, and they must become partners in the movement instead of being dragged along for the rough ride.

From Migration to Transformation.


For the past few decades, and since the introduction of the PC, change has been the only constant in IT, and it comes in waves. Each wave brings quick cycles of change at first; then it matures and the cycles slow down, until another wave hits and we repeat.

The PC was the first wave, with 'killer apps' coming fast and furious (remember WordStar?). Once DOS matured and slowed down, the GUI wave came, followed by the 'Internet' wave and its sibling, the 'Internet application' wave, and once those matured and the cycle of development slowed down, the mobile app wave came.

The mobile app wave is still raging with cycles of change; expect this to cool down sometime in the near future, only to be followed by the Internet of Things, wearable gear, and apps-everywhere waves.

Every wave 'cycle' brings with it a 'migration'. Enterprises know very well how painful those have been and can still be, but they also know their benefits, and the danger of not migrating at the right time.

I have seen four types of migrations and/or transformations in enterprises over the past decades.

Release Migration

This type of migration happens fast and often, especially in the early days of a wave. Think of the J2EE release frequency in the early days of the 'online application' wave (before it was renamed Java EE), or think of the insane number of frameworks and languages for building mobile applications and DevOps tooling in the current mobile application wave.

Competitive Migration

Ah, those were fun, and they are usually heated in the early days of a wave, when technology providers jostle for space in the emerging market. I do have fond memories of being the 'WebSphere guy' in BEA WebLogic environments (back then).

Technology Migration

This type of migration usually happens just before the previous two, but it happens much less often (and we should thank the IT gods for that).
These migrations are usually chaotic, disruptive, and painful, with many jobs lost and new jobs created, and they carry a shift in IT department culture along (more on that below).
There was a threshold where mainframe 'screen scraping' just did not cut it any more and that good old trusted mainframe application was going to be rewritten in Java (horror of all horrors). People like myself try to blog to ease the pain of those moving from one technology to another, but no matter what, there will be casualties at both the department and personal levels.

Methodology Change

Methodology change is what happens when we move from waterfall to agile, or when a gigantic corporation like Walmart moves to DevOps.
These changes are just as painful as technology migrations and are usually a result of such moves.
The hallmark of methodology change is resistance: it is almost impossible to effect these changes by just issuing 'top-down' commands. Leadership from behind is a must, and building a grassroots movement to support such change is key.

One of the best books I have enjoyed that discusses only this type of change is Succeeding with Agile. It is a great read, not on agile itself but rather on the group psychology of change and how it impacts organizations.

Key challenges for successful change.

From my work on many migrations through the years, if I had to choose the key challenges that IT managers need to keep an eye on, they would be:

Resistance to change 

This is just human nature: we are creatures of habit, and IT professionals (the really good ones that you want to keep happy) identify on a personal level with their work, and change brings with it all kinds of insecurities and vulnerabilities.

Need for grassroots support 

This goes hand in hand with resistance to change: the bigger the change, the more we need to build grassroots support, introduce change gradually, and generally lead from behind.

Operation impact 

This is not "Operation Migration" but rather the impact of the migration on the development/operations ecosystem: factors like bringing in outside consultants to help with the migration, or promoting from within (related to the grassroots factor above).

Skill gaps 

It is a fact of life: every change has a skill gap. Not all skill gaps are created equal, and to complicate matters, not all IT professionals respond to skill gaps equally.


Conclusion

There has never been a greater need for grassroots support and 'push from behind' migrations than today. With modern technologies, the shift to the cloud, and the outsourcing of IT services, the focus is slowly shifting back to development and developers, and any IT migration and transformation process in the mobile/cloud/analytics era must take that into account.



IIB service development, pitfalls to avoid for new JAX-WS migrants.

DISCLAIMER: The following article discusses the behaviour of IIB 9.x, WMB 8.x and prior releases; future IIB versions may (and most likely will) alter this behaviour. These are my personal views and do not represent IBM's official position.

Introduction 

The more I work with IIB (and WMB), the more I feel it is the best ESB solution available from IBM, and arguably from any provider (my bias notwithstanding). Developers moving from WESB to IIB will be pleased with the powerful range of features available and with the speed and ease of development.
However, a deeper understanding of the roots of IIB in MQ shows it is a different environment from WESB, which was a J2EE implementation; once you take that into consideration, migrating projects from WESB to IIB should be a much easier task.

The following blog entry should help you avoid some of the most common pitfalls and make your journey into WS development in IIB a lot friendlier.

You can find a quick tutorial on how to build IIB web services in my Worklight to IIB series here 

Use ESQL SET  with caution.

The order of elements in an XSD matters for XSD validation; breaking the order leads to invalid XML.

Take a look at this XSD and note the order of the elements.



When using the ESQL content assist you will see what you expect; the XSD is now part of the SOAP and XMLNSC parsers.


The first thing to notice here is that content assist does not display elements in the same order as the XSD (a sign of things to come!).

In the code snippet above you also see the use of REFERENCE, a common technique for writing neater source (and arguably faster-running code).

Now we run the code and examine the resulting XML.
 

Now you get the picture: follow both the green arrows and the blue arrows and you will discover that the order of the XML output actually DEPENDS on the order of your code execution.

 That SOAP message that the code above generates is INVALID.

Conclusion.

  • Your code must execute its SETs in the same order as the original XSD declares its elements.
  • As you saw earlier, the content assist is not in the right order, so do not be fooled by it.
  • If you plan to populate an XSD model through multiple steps (by using Environment or LocalEnvironment), you must make sure that all your code pieces execute in the same order every time.
  • To guard against such mistakes, and for peace of mind, I highly recommend using a 'validate' node before you return your message (performance implications notwithstanding).
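To make the order dependence concrete, here is a tiny hand-written ESQL sketch; the Person element names are my own illustration, not the code in the screenshots above:

```esql
-- Assume the XSD declares firstName BEFORE lastName inside Person.
SET OutputRoot.XMLNSC.Person.lastName  = 'Smith';  -- executed first
SET OutputRoot.XMLNSC.Person.firstName = 'John';   -- executed second

-- The serialized XML follows execution order, not the XSD:
--   <Person><lastName>Smith</lastName><firstName>John</firstName></Person>
-- which fails validation against a schema that puts firstName first.
```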

Avoid the namespace mayhem.

"Mayhem : A state or situation of great confusion, disorder, trouble or destruction; chaos. " Wiktionary.

It is common for a single WSDL, or even a single web service operation, to have multiple namespaces; the sample I have here has two:
  • Service elements : "http://KLFSamples.ESB.WL/service/"
  • Data elements     : "http://www.example.org/DB_Data"
Now look at this simple ESQL code that sets the SOAP response body to fixed values.


NOTE: The use of REFERENCE is highly recommended, but you cannot use a REFERENCE until an element is created, so a SET on an element of the 'Person' object has to happen before I can reference it. Personally, I think it is one of those bugs that became a 'feature' as time went on, but I digress.

The previous code looks straightforward, but the XML generated in the SOAP message carries a real "interesting" surprise for those used to JAX-RPC and JAX-WS.

It seems that IIB 9.0 (I have not tested IIB 10 yet) and previous versions of WMB have their own peculiar way of adding and duplicating the same namespace in multiple places.

The problem gets worse as the same 'NSx' prefix can be defined with two different values in different places within the same SOAP message, inside different hierarchical structures.

So far, this is just an 'ugly' message, not an invalid one. But the problem compounds if you are actually writing into the input of a SOAPRequest node: if that node calls another IIB (or WMB) service, those unexpected namespaces become embedded in the elements as they travel through the IIB engine, and the reply from that SOAPRequest node will for sure carry invalid XML and will probably break your flow (do not ask me how I know).

Solution 

Thanks to my friend and WMB guru in ISSW, Scott Rippley: the answer lies in the developer manually enumerating all namespaces in the SOAP message and inserting them all into the XMLNSC or SOAP domain, thus forcing the parsers to use them and cleaning up the output.
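In ESQL that insertion is done with the NamespaceDecl field type. A sketch under my sample's two namespaces, shown with the documented XMLNSC syntax; the svc/data prefixes and the getPersonResponse element are my own illustration:

```esql
DECLARE svc  NAMESPACE 'http://KLFSamples.ESB.WL/service/';
DECLARE data NAMESPACE 'http://www.example.org/DB_Data';

-- Declare every namespace once, at the top of the message body, so the
-- serializer reuses these prefixes instead of minting duplicate NSx ones.
SET OutputRoot.XMLNSC.svc:getPersonResponse.(XMLNSC.NamespaceDecl)xmlns:svc  = svc;
SET OutputRoot.XMLNSC.svc:getPersonResponse.(XMLNSC.NamespaceDecl)xmlns:data = data;
```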


The resulting SOAP message will now look like what you would have expected originally.

Technical Summary

  • Keep track of all the namespaces used in your code, abbreviated as NSxx by the ESQL content assist.
  • Keep in mind those namespaces may span multiple files.
  • Insert all of them, using your own naming convention (NSyy), at the top of your SOAP message using NamespaceDecl.
  • Feel free to read more about it in the Infocenter topic here.
  • Remember to declare all your namespaces at the top of your code because... well... order is important, right?