Friday, October 17, 2014

Big Data and Digital Businesses

It's no secret that "Big Data" is (slowly, but) surely moving from being an IT buzzword to mainstream adoption in organizations of all sizes. 

Almost all mature and progressive software manufacturers are focusing on building solutions that encompass holistic integration capabilities, taking into consideration the convergence of SMAC and its impact on modern businesses. Just as there is hardly any software vendor that doesn't offer an on-cloud or on-demand option these days, big data & analytics capabilities are driving a lot of business stakeholders' decision making and forcing software vendors to enhance their offerings.

More and more modern businesses are taking the digital route. Focus on customer touch-points has never been so intense, not to mention expensive, in the history of customer lifecycle management. Multi-channel commerce is taking a new shape with the maturing 'internet of connected devices', and is pushing tons of new data for ingestion every minute.

There are multiple reasons and events that could be attributed to the newfound fame and popularity of Big Data and Analytics; a few off the top of my head:
  • Inexpensive primary storage
  • Groundbreaking enhancements in in-memory grid computing and columnar compression technology in DB
  • Cloud Computing moving more and more data away from on-premise model
  • Ever increasing number of mobile subscribers globally
  • Seamless connectivity and network availability et al

All said & done, Big Data success is NOT a matter of luck. In fact, I personally believe Big Data is probably one of those few elite IT initiatives that require a lot of top-down push coming directly from the CxO suite for enterprise-wide adoption and support.

The publication "Big Data as a Service" talks about a couple of very plausible use cases of the future, and also touches upon the multiple tangible and intangible dimensions associated with Big Data.

This is the age of networking, where it no longer matters what the underlying platform for hosting the data is or what MW tools we use to transfer the data. It's an age where data is the true king, not applications or platforms. Players like Salesforce and SugarCRM that have built an entire ecosystem in the cloud are forcing customers to move their attention away from the complexities of platforms, applications and maintenance, and towards data. We are already on the verge of moving away from "application based enterprises" to "data driven enterprises." As long as customers know what data they need to draw the conclusions and decisions that help grow the business, they are empowered by cloud solutions to make informed and educated decisions.
The quantifications of data based on volume, velocity, complexity and variety are the most commonly discussed dimensions of Big Data surrounding information management. However, as we go deeper into Big Data, we are more than likely to discover that the conversion process "from data to information" traverses multiple complexities and dimensions.

The most critical items that large and mid-sized corporations should keep in mind while working on an enterprise-wide Big Data initiative are:
  1. (Big) Data is everyone's business. From LOB heads to EAs, data architects and business analysts - everyone is expected to add value
  2. Don't ignore Data Governance
  3. Don't ignore Data Security & access control
  4. Break down operational inefficiencies and build logical tiers for seamless data flow from data source to data consumer.
  5. Don't go Big Bang; at the same time, don't be solely opportunistic. Find a balance - push for Big Data while being practical within corporate boundaries.
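Point 3 above can be made concrete with a tiny sketch of dataset-level access control; the roles, dataset names and policy table below are purely illustrative assumptions, not a reference to any particular product:

```python
# Minimal sketch of data access control for a Big Data platform.
# Roles, datasets and the policy table are hypothetical examples.

POLICY = {
    # dataset -> roles cleared to read it
    "customer_pii": {"data_steward", "compliance"},
    "web_clickstream": {"data_steward", "analyst", "marketing"},
    "finance_ledger": {"data_steward", "finance"},
}

def can_read(role: str, dataset: str) -> bool:
    """Return True if the role is cleared to read the dataset."""
    return role in POLICY.get(dataset, set())

# An analyst can query clickstream data, but not raw PII:
assert can_read("analyst", "web_clickstream")
assert not can_read("analyst", "customer_pii")
```

Even a toy policy table like this forces the conversation about who owns which data asset, which is exactly where governance and security discussions tend to stall.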

At the end of the day, Sales Operations, Digital Marketing, Customer Support, R&D and even back-office operations are all craving the right insights at the right time. And today is as good a day as any to start thinking about data at the front end of corporate initiatives, and to let corporate programs be data driven rather than simply functionality and capability driven.

Friday, July 4, 2014

Aligning Agile with TOGAF EA framework

This is a continuation of my series on aligning EA with Agile/Scrum frameworks. The first chapter of this series can be found at Aligning Agile with Zachman EA framework. In the sections below I try to showcase one of countless possible ways to align TOGAF with Agile/Scrum - with the objective of demonstrating the flexibility and adaptability of EA frameworks and their applicability to modern enterprises.

Again, as a disclaimer carried over from the previous chapter in the series - without getting into a religious debate about whether EA should report into the CEO or the CIO, I would like to focus on the simple objective of having/building an EA practice. In layman's terms, Enterprise Architecture is the formal practice/group/team that aligns the business strategy with IT spending, and Enterprise Architects achieve this goal by collaborating and working closely with a number of stakeholders across the board on the business, application, PMO and QA sides of the house. In my experience I have partnered with PMO and Portfolio managers to help lay down an iterative SDLC model, with a foundation built upon Agile principles.

Aligning TOGAF with Scrum

TOGAF is one of the newer EA frameworks; however, in the last decade TOGAF 9 has gained great momentum and a very high rate of acceptance throughout IS&T organizations universally - mostly attributed to its flexibility and adaptability. 

TOGAF is just guidance, and the user group can custom-build it to suit its business capability model. TOGAF also provides guidance in the shape of the ADM, III-RM, TRM et al. - again, a testament to the adaptability and universality of good EA frameworks. 

The view below is a representation of the TOGAF Architecture Development Method (ADM) broken into logical demarcations. 

If you study the TOGAF ADM closely you'll notice how the ADM provides structure and a framework for the whole of project and portfolio management. The ADM aligns the SDLC into a mature, repeatable sequence of events, and sets high-level gates (milestones) to help associate each phase with the adjoining phases. 

Let's break down the ADM phases in layman's terms to establish a clearer understanding of the objectives of each ADM phase:-

Phase A - Architecture Vision
Develop a high-level aspirational vision of the capabilities and business value to be delivered as a result of the proposed enterprise architecture

Phase B - Business Architecture
Develop the Target Business Architecture describing how the enterprise needs to operate to achieve the business goals, responds to the strategic drivers set out in the Architecture Vision, and addresses the stakeholder concerns

Phase C - Information Systems Architecture
Develop the Target IS Architecture that enables the Business Architecture and the Architecture Vision, while addressing the stakeholder concerns

Phase D - Technology Architecture
Identify candidate Architecture Roadmap components based upon gaps between the Baseline and Target Technology Architectures

Phase E - Opportunities & Solutions
Generate the initial complete version of the Architecture Roadmap, based upon the gap analysis and candidate Architecture Roadmap components from Phases B, C, and D

Phase F - Migration Planning
Ensure that the Implementation and Migration Plan is coordinated with the enterprise's approach to managing and implementing change in the enterprise's overall change portfolio

The fact that the ADM revolves iteratively around the business requirements, and repeats logically after the completion of each iteration, is a testament to its alignment with Scrum methodologies and its close alignment with the business objectives and goals of an enterprise/organization at a fundamental level.

Laying these phases out in sequential order - keeping in mind the high-level objectives of each phase, and considering agile/scrum methodologies on the other side of the thought process - one could lay out a representation as marked below. In the view below, i.e. FIG.1 - TOGAF aligned with the Scrum methodology for SDLC, I have showcased how TOGAF aligns with Agile/Scrum in a top-down representation. 

FIG.1 - TOGAF aligned with the Scrum methodology for SDLC

Notice how the view above showcases gates/milestones - indicating the handshake opportunities/placeholders for the various stakeholders involved in the end-to-end SDLC, as project responsibilities move between teams/stakeholders. 

The gates could be used to mark out clear demarcations, and also to set protocols and standards for information exchange between stakeholders representing different groups/units.

This same model could be extended, or even taken a step further to Phase H of the ADM, i.e. Architecture Change Management. This is applicable in cases where the original architecture solutions could not be applied/conformed to, where the original requirements change, or where a large number of defects during UAT leads to rework - or, most commonly, where a backlog of unfinished user stories is taken up and re-prioritized into the next sprint.

Aligning Zachman and TOGAF together with Agile/Scrum

Overlapping the TOGAF/Scrum representation above with the Zachman/Scrum model, one could easily link the two and see the close association between the two models.

The output from gate 1 and gate 2, i.e. the Corporate Strategy and Technology Alignment phases, evidently becomes input for the tactical plans in the shape of user stories/epics - answers to when, who, how, what, where and why, all addressed during the earlier phases. Please see FIG.2 - SDLC Gates based upon the Zachman, TOGAF and Agile/Scrum Alignment for better clarity.

FIG.2 - SDLC Gates based upon the Zachman, TOGAF and Agile/Scrum Alignment


Zachman is a true example of an EA framework that has stood the test of time and has been adopted by thousands of organizations worldwide. The adaptability and universal applicability of Zachman make it stand atop all EA frameworks - a testament to the flexibility of a mature EA framework.

TOGAF, on the other hand, is inherently flexible, since it simply provides EA practitioners with guidelines and methodologies, leaving the implementation details to the EAP. If anything, it adds to the agility of the practicing organization and provides a means of progressive, incremental maturity. 

Hopefully, this helps the EA skeptics understand how easily an EA framework can align with Scrum/Agile methodologies and enhance the productivity and agility of the SDLC. It should help skeptics realize that one can tweak and morph an EA framework to meet business needs and IT expectations. 

In the end I would like to sum it up by giving an analogy - 
Building a large, complex, enterprise-wide information system for the modern enterprises without an EA framework is like trying to build a modern cosmopolitan city without a city planner or a blueprint. 
Can you build a city without a blueprint? Probably Yes.  
However, would you want to live in such a city? Probably NOT.

Enterprise Architecture is simply indispensable for the modern enterprises. 


Disclaimer - The reader is expected to have basic knowledge of TOGAF, the ADM and Agile. Also, please note that there are many more EA models/frameworks, and the value chain could potentially be represented in dozens of different ways. That being said, Agile and EA could be aligned in multiple different ways - owing to the flexibility offered by both - as long as the adopting organization knows what it's looking for.

REF-1: The Zachman Model


Wednesday, May 28, 2014

Aligning Agile with Zachman EA framework

More often than one would like, I come across misguided grievances from seasoned IT leaders (and, surprisingly enough, NOT so much from the business side of the house) on private and public online forums and elsewhere - that EA is good only for organizations that have established businesses and lots of money to spend on IT. Some of the most common phrases I have come across in the last 3-4 years - 

"EA and Business Agility are an oxymoron"

"Enterprise Architecture is rigid and inflexible"

"Enterprise Architecture makes the SDLC process heavy and dense"

"Enterprise Architecture works only for the companies that run on waterfall SDLC model"

However, contrary to these fairly common misguided beliefs, EA is actually critical for progressive businesses and is often deep-rooted in organizations with rather frugal IS&T. If anything, EA makes the SDLC more efficient and agile - when applied correctly.

Without getting into a religious debate about whether EA should report into the CEO or the CIO, I would like to focus on the simple objective of having/building an EA practice. In layman's terms, Enterprise Architecture is the formal practice/group/team that aligns the business strategy with IT spending, and Enterprise Architects achieve this goal by collaborating and working closely with a number of stakeholders across the board on the business, application, PMO and QA sides of the house. In my experience I have partnered with PMO and Portfolio managers to help lay down an iterative SDLC model, with a foundation built upon Agile principles.

Zachman, TOGAF, FEA, Oracle EA (thinly based on TOGAF), and the Gartner Meta Model are possibly the best-known EA frameworks in the market right now. However, for the purposes of indicating the alignment between EA and agile SDLC, this blog series focuses only on the TOGAF and Zachman models.

Aligning Zachman with Scrum
The Zachman framework is probably the first formal EA model that provided users with a robust framework that could be custom-built to fit various LOBs and domains. The Zachman "Framework" is actually a taxonomy for organizing architectural artifacts that takes into account both who the artifact targets and what particular issue is being addressed. Though the Zachman model was originally envisioned for SCM, over the decades of its existence it has been used in innumerable scenarios and LOBs outside the SCM industry - a testament to the robustness and universality of EA frameworks. 

In simple terms, Zachman is a 6x6 matrix with a taxonomy spanning four different consumer views - Enterprise Names, Model, Audience, and Artifact Classification. For the purposes of showcasing the alignment of Zachman with Agile, I will focus entirely on the Enterprise names and Classification names, and try to mould the original model into a newer model that aligns easily with Agile principles. 

In order to align and rhyme with the Agile theme, I have re-ordered the classification names in the revised model (Zachman does NOT rigidly bind the user to a fixed ordering).

In the revised Zachman picture below, the original Enterprise names have been revised in an attempt to align the model closely with geographically and physically distributed modern enterprises.

  • 'Event' here indicates the trigger or alarm condition for the business flow or process to begin.
  • 'Capability' reflects the business rules and principles encompassing the asset or the action to be taken.
  • 'Function' could be automated or manual, or a combination of the two, defining the course of action.
  • 'Asset' could be an object, a piece of information, an ER, et al. It could be anything the business considers an asset. 
  • 'Location' could be on cloud, on-premise or hybrid - the location of the asset on which the action is to be taken.
  • 'Reason' should be included for tracking purposes, so that we can establish why a change was made or suggested.

From here, we can delve a level deeper and see how the revised Zachman could be plugged into the scrum SDLC for defining each user story. The section highlighted in yellow above represents a scrum user story at the operational level, capturing the details in a pre-defined template/format. When this revised Zachman model is plugged into the Agile [REF-2] SDLC, an organization could benefit immediately in the shape of - 
  • primarily, every user story would now have a story/theme,
  • it would provide consistency in capturing requirements at the right level of detail for all projects, 
  • it would provide just the right amount of detail at the front end of the process,
  • it would enable the solution and technical engineers to work in close collaboration with the business analysts during the conceptualization phase,
  • this approach might also help in reducing the backlogs between sprints,
  • finally, this is an extensible solution and can easily be modified to align with any change(s) in business and/or EA guidelines.
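The revised-Zachman user-story template discussed above could be captured in code as a simple structure. The field names mirror the revised Enterprise names from the text; the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """Scrum user story captured per the revised Zachman template:
    Event, Capability, Function, Asset, Location, Reason."""
    event: str       # trigger/alarm condition that starts the flow
    capability: str  # business rules/principles around the action
    function: str    # automated, manual, or a combination of the two
    asset: str       # object, piece of information, ER, etc.
    location: str    # cloud, on-premise, or hybrid
    reason: str      # why the change was made/suggested (traceability)

# A hypothetical story captured in the template:
story = UserStory(
    event="New customer signs up on the web portal",
    capability="Validate customer data against KYC rules",
    function="Automated validation with manual review fallback",
    asset="Customer master record",
    location="Hybrid - portal on cloud, MDM on-premise",
    reason="Reduce onboarding errors reported by support",
)
```

Because every story carries the same six fields, requirements are captured consistently across projects, which is the first benefit listed above.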

On the flip side - Agile being an umbrella for multiple known methodologies (Scrum, Kanban, Scrumban, etc.), I believe it too could be moulded to meet organizational requirements, just as we saw the Zachman framework custom-built to suit organizational expectations and culture.

Here we have covered just the user-story aspect of the agile process (aligned with the Zachman model); in the next part of this blog series I will showcase how one could make the overall SDLC process iterative and align TOGAF with the Agile SDLC at the epic and sprint level.


Disclaimer - Please note that there are many more EA models/frameworks, and the value chain could potentially be represented in dozens of different ways. That being said, Agile and EA could be aligned in multiple different ways - owing to the flexibility offered by both - as long as the adopting organization knows what it's looking for.

REF-1: The Zachman Model

Sunday, May 19, 2013

Service Oriented Architecture (SOA) Offerings, Comparison and Evaluation

SOA suites available in the market are like any other COTS products - one size does NOT fit all. To avoid a 'square peg in a round hole' situation, it is very important for solution and enterprise architects to make sure that the chosen SOA stack meets current needs and the future vision, and helps align IT with the overall business road-map.

The publication SOA Stack Comparison, first published for/by ServiceTechMag, does a deep-dive analysis and comparison of the major and most competitive proprietary SOA stacks in the market today. 

Excerpts from the publication - 
"... It is very important for all prospective SOA adopters to know that SOA is not a turn-key project. In fact, SOA is a discipline, a long-term commitment of collaboration between business and IT and an attitude overall. There are multiple technology suites, both proprietary and open source, available on the market that facilitate SOA journey and help organizations achieve overall IT maturity-however, a well-versed team of solution and enterprise architects is required to find the solution that best meets the enterprise vision and requirements....."

...While evaluating a stack, one must also consider the out-of-the-box support for messaging, transport protocols and different document types that are supported by the chosen SOA stack.

As I mentioned in the beginning of the article, SOA and cloud are both complementary enterprise solutions and incidentally have been strategically promoted by both Oracle and IBM. One must also carefully evaluate the support for cloud and mobile applications when considering a SOA stack. The support for multiple cloud deployment topologies is very critical for middleware platforms for enterprises of the future. Oracle and IBM both have a very strong presence in cloud, and it was a rather seamless movement for both to take SOA on cloud. SAG, on the other hand, started its journey for mobile just a couple of years back, and forayed into SOA on cloud with its 8.x release of wM.

Code version management is another pivotal area that every business-driven EA team needs to keep in mind when deciding on an integration suite. A SOA-centric code management tool could go a long way for an enterprise in terms of lower TCO and higher ROI. I have used CVS and IBM ClearCase for Oracle SOA Suite, for both source code and config files. I personally feel that SAG webMethods has the weakest code management support within its stack, probably because code is saved primarily as config files. CrossVista does provide wM users with a ray of hope for code and release management..."

I personally believe one could add much more to this comparison matrix, but that is very subjective, and I would recommend that prospective buyers use this article as a guide and do complete due diligence with prototyping and modeling before investing in any MW solution.

Monday, August 13, 2012

Enterprise Data Architecture, SOA and Data Services

Decades of R&D and growth around EAI, ETL, MDM and SOA have led us to one conclusion alone – data matters. It has always been about data. I guess it’s no secret either that content is the king of the consumer web, and data is the king of enterprise software. 

Interestingly, enterprise data requirements are fundamentally too complex, and too closely driven by the high-volume, multi-dimensional nature of BI systems, to be serviced entirely from a messaging layer alone. Unfortunately, the data universe of DW, BI and MDM still continues to live outside the Enterprise SOA Vision/Road-map, which significantly dents the overall ROI for IT/SOA expenditure.

Solution Architects (most often under corporate obligations) often fail to notice that a significant chunk of the top-line business value of SOA resides in the data that the SOA suite pushes between applications/service components. SOA can efficiently orchestrate an ETL/ELT module to process bulk data, batch jobs or file transfers in near real-time. A SOA stack can also integrate the BI modules to embed BI metrics within BPEL workflows, generate business events from BI alerts, and embed analytics in business rules - hence helping make smart real-time decisions.
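As a rough sketch of the 'generate business events from BI alerts' idea, the snippet below maps a BI alert to a business event that a BPEL/ESB workflow could subscribe to; the metric name, threshold and event shape are all assumptions for illustration:

```python
from typing import Optional

# Illustrative sketch: turn a BI alert into a business event for the SOA tier.
# The alert format, metric name and 5% threshold are hypothetical.

def bi_alert_to_event(alert: dict) -> Optional[dict]:
    """Map a BI alert to a business event, or None if no event is warranted."""
    if alert["metric"] == "daily_churn_rate" and alert["value"] > 0.05:
        return {
            "event": "HIGH_CHURN_DETECTED",
            "payload": {"churn_rate": alert["value"], "region": alert["region"]},
        }
    return None

event = bi_alert_to_event(
    {"metric": "daily_churn_rate", "value": 0.07, "region": "EMEA"}
)
assert event is not None and event["event"] == "HIGH_CHURN_DETECTED"
```

In a real stack the emitted event would be published to the bus, where a BPEL workflow or business rule engine picks it up - the point being that analytics output becomes a first-class integration event.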

SOA plays a significant role in the MDM and Information Architecture roadmap of any big or small enterprise. Essentially, within the walls of Information Architecture, the SOA tier plays a pivotal role in Data Federation, encompassing data massaging, validation, cleansing, consolidation, etc. However, the inherent inefficiencies of XML with large data handling, and the fact that almost all SOA suites are built upon Java containers, necessitate a highly optimized caching server within the MW tier to federate IA using SOA. The added setup/maintenance cost of a caching server steers us toward utilizing SOA Data Services alongside the Data Federation tier.

SOA Data Services are not just services operating on XML. Instead, SOA Data Services are enterprise data management end-points that expose highly optimized engines for working on all types of data. The data service as a concept predates SOA; in fact, it dates back to the time when B2B ecommerce and EDI were gaining momentum in the business space in the early 1970s. 

Technically, a data service should exhibit two or more of the following attributes – 
  • Contract based routing, 
  • Declarative API, 
  • Data encapsulation, 
  • Data abstraction and 
  • Service metadata. 
A close look at the above-mentioned attributes shows the direct relation between Data Services and the basic pillars of SOA. However, we should never confuse SOA with a unified data tier. 
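A minimal sketch of a data service exhibiting contract-based routing and service metadata might look like the following; the contract names and handlers are invented for illustration:

```python
from typing import Callable, Dict

class DataServiceRouter:
    """Contract-based routing: dispatch requests to handlers by contract name,
    keeping callers decoupled from the engine behind each contract
    (data encapsulation/abstraction), with service metadata alongside."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], dict]] = {}
        self.metadata: Dict[str, str] = {}  # service metadata attribute

    def register(self, contract: str, handler: Callable[[dict], dict],
                 description: str) -> None:
        self._handlers[contract] = handler
        self.metadata[contract] = description

    def invoke(self, contract: str, request: dict) -> dict:
        # Caller names only the contract, never the engine behind it.
        return self._handlers[contract](request)

router = DataServiceRouter()
router.register("customer.lookup",
                lambda req: {"id": req["id"], "name": "ACME Corp"},
                "Returns the golden customer record by id")
assert router.invoke("customer.lookup", {"id": 42})["name"] == "ACME Corp"
```

Swapping the handler behind "customer.lookup" (say, from a cache to an MDM hub) requires no change on the consumer side - which is exactly the loose coupling the attribute list is driving at.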

The great value of the latest, up-to-date and trusted information is more evident at present than ever before. “The two second advantage” [REF-1] is indeed a game changer. Incidentally, I have worked closely on enterprise architectures where SOA was employed as the means of integration between ETL, BI, MDM and Web Applications. By building the enterprise components strategically, and then creating elegant, unified integration solutions using SOA, enterprises are more likely to achieve the following objectives;
  • reduce the TCO for data (MDM), 
  • increase agility (hot plug-gable SOA), 
  • improve performance (real-time batch/DW), 
  • reduce risk (integrated human workflow) and 
  • improve business insight (BI)

A mature Data Integration Architecture is the foundation of a strong EA. The success or failure of application integration and SOA composites is directly related to the maturity of the data integration tier.

Enterprise-level Data Services can be broadly grouped into the following categories;

  • Master Data Services can form a trusted single point of reference during real-time or batch data movements. 
  • Batch Data Services can work closely with BPEL/ESB so that the point of control for invoking/running ETL remains inside the SOA tier - just like the classic delegation design pattern, providing architectural loose coupling. 
  • Data Quality Services are typically recommended to be used inline with other data services, or else statically, directly against the data source. These data services use pre-defined rules and algorithms to clean, reformat and de-dupe the data. 
  • Data Transformation Services are classic data services used primarily for swapping data formats to meet downstream system requirements. These data services require a very critical view of the ESB and ETL tiers within the EA landscape to make an appropriate and architecturally efficient choice. 
  • Finally, Event Services are driven primarily by EDA/CEP; these data services track, correlate and propagate data on certain pre-defined triggers within the MW.
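A Data Quality Service of the kind described above can be illustrated in miniature; the cleansing rules (trim, lower-case, de-dupe on email) and the record shape are illustrative assumptions:

```python
# Miniature Data Quality Service: pre-defined rules to clean, reformat
# and de-dupe records. The rules and record shape are hypothetical.

def cleanse(records: list) -> list:
    """Normalize email addresses and drop duplicate records."""
    seen = set()
    out = []
    for r in records:
        email = r["email"].strip().lower()  # reformat rule
        if email in seen:                   # de-dupe rule
            continue
        seen.add(email)
        out.append({**r, "email": email})
    return out

rows = [{"email": " A@X.COM "}, {"email": "a@x.com"}, {"email": "b@x.com"}]
assert [r["email"] for r in cleanse(rows)] == ["a@x.com", "b@x.com"]
```

Used inline, a service like this sits between the source and every consumer, so each downstream system sees the same cleansed view without re-implementing the rules.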
In the end, I would like to recommend the following pointers for prospective adopters of SOA-based Data Services;
  1. Start small – don’t go big-bang
  2. Use XML judiciously – explore other means
  3. Evaluate trade-offs diligently for any conflicting solutions/approaches
  4. Don’t be a victim of ‘The Accidental Architecture’
  5. Devil is in the details – take a deep dive
Irrespective of the visible and obvious integration points in an existing EA landscape, one must understand that the true value of the architecture tier resides in the flexibility of the core infrastructure to be reconfigurable with minimal resource overhead. This re-configurability is a central characteristic of the Data Services approach, and the foundation of a successful long-term strategy for Enterprise Data Architecture. 

REF-1: The “two second advantage” is a concept coined by TIBCO founder Vivek Ranadivé.

Sunday, July 29, 2012

ESB Deployment Patterns - Theory in Practice

The modern Enterprise Service Bus (ESB) is inherited from the classic computing concept of the Information Bus from the ’80s, and the ESB was glamorized in the late ’90s and early 2000s by technology giants TIBCO, Oracle, IBM and SAG. In the last decade a lot of open-source players have also forayed into the mainstream competition and produced some amazing ESB products - especially Mule, Fuse, WSO2 and Red Hat. This disruption in the ESB market has only helped end consumers and enterprises alike.

The popularity of SOA and Cloud solutions has also helped the ESB penetrate deeper into the enterprise architecture landscape. Not only does an ESB enhance the SOA and Cloud experience, it also helps address security-related concerns to a great extent for the entire enterprise/LOB - by working in a “Brokered Authentication” pattern and becoming the single point of authentication & authorization. Though the mechanism varies vastly between offerings, and also between enterprises, the underlying principle remains the same.
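The "Brokered Authentication" idea - the ESB as the single point of authentication & authorization in front of back-end services - could be sketched roughly as below; the token store, scopes and service names are assumptions for illustration:

```python
# Rough sketch of an ESB brokering authentication & authorization.
# Tokens, scopes and service names are hypothetical.

VALID_TOKENS = {"tok-123": {"scopes": {"orders:read"}}}

def esb_gateway(token: str, service: str, scope: str) -> str:
    """Authenticate and authorize once at the bus, then route onward."""
    session = VALID_TOKENS.get(token)
    if session is None:
        return "401 Unauthorized"          # authentication fails at the bus
    if scope not in session["scopes"]:
        return "403 Forbidden"             # authorization fails at the bus
    return f"routed to {service}"          # back-end trusts the bus, not the caller

assert esb_gateway("tok-123", "order-service", "orders:read") == "routed to order-service"
assert esb_gateway("bad-token", "order-service", "orders:read") == "401 Unauthorized"
```

The key point is that back-end services never see raw credentials; they trust requests arriving over the bus, which centralizes the security policy in one place.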

However, the advancements and refinement in ESB products over the last couple of decades do not necessarily guarantee higher ROI and lower TCO for the enterprises that choose to use ESB solutions. At the end of the day it all depends on how one deploys the ESB within the EA landscape, and how EA lays down the road-map for the ESB within the IT organization. Please remember that an ESB is just a means to a more robust EA - just an enabler.

In the next sections I share my own experiences with ESB deployment topologies and patterns, to help the reader evaluate and compare the possible solutions for his/her use case(s). I suggest the reader be very critical during the project requirement evaluation phase, and not hesitate to use a combination of two or more of the deployment topologies discussed below.

Global ESB Pattern:

  • Single namespace and canonical model for all service components interactions.
  • Easy to maintain, scale and govern
  • Easy to implement in SMEs.
Global ESB Deployment Pattern
  • Most suitable pattern for heterogeneous, centrally administered and globally centralized EA landscape.
  • The large number of integrations in a large enterprise makes this pattern a less favorable option, since a single canonical model would be difficult to conceive and manage.
  • The ideal approach for this pattern would be bottom-up; however, if you already have a considerable EA landscape and the ESB is a new entrant in an otherwise mature EA, a combination of bottom-up and top-down approaches - i.e. meet in the middle - should be considered.

Directly Connected ESB Pattern:

  • One of the most popular and successful deployment patterns for the ESB
  • It provides means to ‘global decentralization’ and ‘local centralization’ of service components.
  • Most suitable for large organizations with geographically distributed EA landscape and/or multiple LOBs.
  • This is particularly popular and effective in cloud implementations involving hybrid deployment strategy. The hybrid cloud model would be represented by an ESB on premise and the other directly connected ESB on cloud (NOTE: Please consider the complexity related to PII and PCI compliance when working with Directly Connected ESBs in hybrid cloud model).
Directly Connected ESB Deployment Pattern
  • Common service registries could facilitate service discoverability, and also make service governance manageable and the overall architecture scalable.
  • The esb2esb communication protocol facilitates hiding the service component implementation details, providing loose coupling and abstraction.
  • The esb2esb communication is associated with moderately high latency
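The registry-backed esb2esb routing described above could be sketched as follows; the bus names, registry layout and forwarding behavior are hypothetical:

```python
# Sketch of Directly Connected ESBs sharing a common service registry.
# Bus names and registry layout are illustrative assumptions.

REGISTRY = {
    # service -> which ESB hosts it
    "billing.invoice": "esb-onpremise",
    "crm.lead": "esb-cloud",
}

def route(local_bus: str, service: str, payload: dict) -> str:
    """Serve locally if this bus hosts the service, else forward esb2esb."""
    target = REGISTRY[service]
    if target == local_bus:
        return f"{local_bus} handled {service}"
    # The esb2esb hop hides the remote implementation, at the cost of latency.
    return f"{local_bus} forwarded {service} to {target}"

assert route("esb-cloud", "crm.lead", {}) == "esb-cloud handled crm.lead"
assert route("esb-cloud", "billing.invoice", {}).endswith("to esb-onpremise")
```

The caller only ever talks to its local bus; the shared registry decides whether the request stays local or crosses to the peer ESB - the 'global decentralization, local centralization' described above.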

Brokered ESB Pattern:

  • It is a direct representation of the classic HUB-SPOKE model
  • The broker selectively exposes service components to providers, consumers and partners.
  • This deployment pattern is best suited for large organizations with multiple trading partners, operations and/or platforms
  • The broker ESB regulates the communication and data/info exchange between multiple ESB installations on both sides, where each individual installation is managed and/or governed separately
Brokered ESB Deployment Pattern
  • Maintenance of service registries and repositories is a complex and costly proposition.
  • Over a period of time, this pattern is the most vulnerable to the Spaghetti (hairball) anti-pattern.
  • The Brokered pattern induces inherent latency in the business process and is NOT suitable for use cases where latency is a critical parameter/KPI.

Federated ESB Pattern:

  • It is the most frequently advised ESB pattern; however, in my opinion it is one of the most difficult ESB patterns to pull off, owing to the complexity associated with the lateral components of governance.
  • It is a good candidate for being referred to as a ‘global ESB pattern’, since it is applicable in multiple use cases - ranging from decentralized to centralized EA, from heterogeneous to homogeneous EA, etc.
  • I like to refer to it as the ‘glorified one-to-many Broker ESB pattern’.
  • It is a good pattern to use when you want to enforce integration governance top-down and want to closely monitor/track interfaces, KPIs and service components.
  • This pattern is a good candidate for an enterprise with multiple centrally managed LOBs
Federated ESB Deployment Pattern
  • Governance should be considered a mandatory requirement when implementing this pattern; else it could lead to the ‘Blob’ and/or ‘Poltergeist’ application anti-patterns.
  • Infrastructure and capacity analysis should be done diligently to deal with the service throughput and latency related requirements (and other NFRs). An inefficiently structured federated ESB tier could lead to Stovepipe systems, which could choke the federated tier.

In my professional experience I have come to believe that every organization has unique integration scenarios, situations, challenges and requirements - requiring a very pragmatic and open view toward each. I maintain that the ESB (or, for that matter, any other COTS product) is not the end, but merely one of many possible means to reach the end.