Legacy to API-Led Connectivity with MuleSoft

When building out greenfield migrations from existing legacy code to MuleSoft, a bottom-up approach is recommended for creating new integration applications. An ESB such as Mule ESB is an important enabler in this type of migration, facilitating the creation and orchestration of services without the need for an application server or other infrastructure components. Given that, migrating to this type of architecture is a two-step process – the first step is to create endpoints that expose business logic as services wrapped inside Mule flows; the second is to create service compositions that represent the actual business processes. These steps lay the foundation for a ‘System API’ layer that connects to the various systems of record (SORs) hosting an organization’s most important asset – its data – and makes it accessible through well-defined APIs.

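To make the two steps concrete, here is a minimal Mule 4-style sketch rather than a production implementation: the first flow wraps a call to a hypothetical legacy HTTP service behind a System API endpoint, and the second flow composes reusable steps into a business process. All names, hosts, and paths (systemApiHttpConfig, legacy-host.internal, /sys/customers, and so on) are illustrative assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

  <http:listener-config name="systemApiHttpConfig">
    <http:listener-connection host="0.0.0.0" port="8081"/>
  </http:listener-config>

  <!-- Placeholder connection to the existing legacy service -->
  <http:request-config name="legacyHttpConfig">
    <http:request-connection host="legacy-host.internal" port="8080"/>
  </http:request-config>

  <!-- Step 1: expose legacy business logic as a service wrapped in a Mule flow (System API) -->
  <flow name="customers-system-api-flow">
    <http:listener config-ref="systemApiHttpConfig" path="/sys/customers"/>
    <http:request config-ref="legacyHttpConfig" method="GET" path="/legacy/customerLookup"/>
  </flow>

  <!-- Step 2: compose services into a flow that represents the actual business process -->
  <flow name="customer-onboarding-process-flow">
    <http:listener config-ref="systemApiHttpConfig" path="/proc/onboarding"/>
    <flow-ref name="lookup-customer-subflow"/>
    <flow-ref name="create-account-subflow"/>
  </flow>

  <!-- Reusable steps kept free of routing and orchestration concerns -->
  <sub-flow name="lookup-customer-subflow">
    <http:request config-ref="legacyHttpConfig" method="GET" path="/legacy/customerLookup"/>
  </sub-flow>

  <sub-flow name="create-account-subflow">
    <logger level="INFO" message="Creating account for the onboarded customer"/>
  </sub-flow>
</mule>
```

While building out these integrations, developers also need to ensure that some foundational design principles are followed: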
  1. Separate implementation logic from routing and orchestration logic – for example, flows that handle orchestration should not contain any business or implementation logic.
  2. Break longer flows down into shorter sub-flows – use the single responsibility principle as a guide. Extract any reusable code into sub-flows or, where applicable, into separate Mule components (the sketch following this list illustrates this and several of the other principles below).
  3. Domain data passed between components should be carried in the message payload (as XML or POJOs). Moreover, try to adopt a canonical data model for your integrations – a common format to which all message payloads are transformed before any further processing by Mule. Such an approach decouples the internal representation carried in the Mule message from the format consumed from connectors such as Salesforce, making it easier to migrate to another external system if the need arises.
  4. Separate core business logic from cross-cutting concerns such as logging and security.
  5. Start by building simple services, then build more complex orchestrations on top of them.
  6. Avoid point-to-point integrations.
  7. Leverage both synchronous and asynchronous integrations – synchronous calls give the caller immediate feedback, while asynchronous processing decouples the caller from slower downstream services and improves overall throughput.
  8. Leverage flows for interactive processes and batch processing for bulk workloads to maximize throughput in cases where latency does not matter.
  9. Build robust error handling; anticipate possible errors in data, endpoint systems, and connectivity (see the error handler in the sketch after this list).
  10. Leverage service virtualization to abstract physical services behind a proxy or intermediary service. This hides the physical location of the service provider and decouples the service consumer from the service provider. It also prevents disruption when the service provider changes, since the consumer continues to interact with the intermediary without being impacted.
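
Several of these principles can be illustrated together. The sketch below – again Mule 4-style and purely illustrative, with assumed flow names, canonical field names, and a Salesforce-shaped source record – keeps the mapping to a canonical model in a reusable sub-flow (principles 2 and 3), keeps the orchestrating flow free of mapping logic (principle 1), runs a non-critical step asynchronously (principle 7), and anticipates bad data with an explicit error handler (principle 9).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd">

  <!-- Reusable mapping step: a connector-specific record (a Salesforce-shaped payload is
       assumed here) is transformed into a canonical customer model before further processing -->
  <sub-flow name="to-canonical-customer-subflow">
    <ee:transform>
      <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
  // illustrative canonical model; all field names are assumptions
  customerId: payload.Id,
  fullName:   (payload.FirstName default "") ++ " " ++ (payload.LastName default ""),
  email:      payload.Email
}]]></ee:set-payload>
      </ee:message>
    </ee:transform>
  </sub-flow>

  <!-- Orchestration flow: no mapping logic of its own, explicit error handling,
       and a fire-and-forget step the caller does not wait for -->
  <flow name="get-canonical-customer-flow">
    <flow-ref name="to-canonical-customer-subflow"/>
    <async>
      <logger level="INFO" message="Audit event recorded asynchronously"/>
    </async>
    <error-handler>
      <!-- bad or unexpected data surfaces as an expression error from the transform -->
      <on-error-continue type="MULE:EXPRESSION">
        <set-payload value='{"error": "Customer record could not be mapped"}' mimeType="application/json"/>
      </on-error-continue>
      <!-- anything else (connectivity, endpoint failures) is propagated to the caller -->
      <on-error-propagate type="ANY">
        <logger level="ERROR" message="Unexpected failure while retrieving customer"/>
      </on-error-propagate>
    </error-handler>
  </flow>
</mule>
```

Because the canonical mapping lives in its own sub-flow, swapping Salesforce for another system of record only requires changing that one transformation, not the orchestration flows that consume the canonical model.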

One aspect noticeably missing from this implementation is an approach to securing these APIs. Some organizations opt to implement security within each API itself, using an approach such as HTTP Basic Authentication over HTTPS. As the number of services grows, however, how do you handle a situation where the security scheme for all of these services needs to change or be strengthened? Such scenarios are better handled by delegating authentication and authorization concerns to a central service that routes calls to the correct backing service.
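
A minimal sketch of that pattern follows: a single intermediary flow checks the caller's credentials once and then routes the request to the backing API, so individual services no longer need to embed authentication themselves. It is only an illustration of the idea – in practice an API gateway and its policies would play this role – and the listener path, header name, expected credential, and backend host are all hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

  <http:listener-config name="gatewayHttpConfig">
    <http:listener-connection host="0.0.0.0" port="8082"/>
  </http:listener-config>

  <!-- Backing System API; host and port are placeholders -->
  <http:request-config name="backendApiConfig">
    <http:request-connection host="system-api.internal" port="8081"/>
  </http:request-config>

  <!-- Central intermediary: authentication is checked once here instead of in every service -->
  <flow name="secure-gateway-flow">
    <http:listener config-ref="gatewayHttpConfig" path="/gateway/*"/>
    <choice>
      <!-- in a real deployment the expected credential would come from secure configuration,
           not a literal value -->
      <when expression="#[attributes.headers.'client_id' == 'replace-with-expected-client-id']">
        <!-- forward the remainder of the request path to the backing service (GET only, for brevity) -->
        <http:request config-ref="backendApiConfig" method="GET"
                      path="#[attributes.maskedRequestPath]"/>
      </when>
      <otherwise>
        <raise-error type="APP:UNAUTHORIZED" description="Missing or invalid client credentials"/>
      </otherwise>
    </choice>
  </flow>
</mule>
```

With this kind of intermediary in place, tightening or changing the authentication scheme later means updating one gateway flow (or one gateway policy) rather than every individual service.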