The Oracle Australia and New Zealand Middleware and Technology Blog.

Friday, May 29, 2009

Enterprise Consistency


Picture (c) www.despair.com

So, let's start with a tongue-in-cheek question.

What's the difference between a political leader (substitute your own relevant PM, president etc. here) and an ECM solution?

The answer is, of course, consistency! Most people vote for their political leadership on the basis that what is promised in the pre-election program will actually be stuck to while they are in office. Of course, we all realise soon after the election that the promises made previously are rarely upheld in full and quite often are reversed entirely.

Organisations know that a database provides consistency in the storage of structured information. If you are running an application like Siebel or eBusinessSuite (SAP if you're mad, bad, loaded and have the palate for years of customisation), you rely upon the fact that a piece of information stored in a particular field will always be there - and other users in other parts of the world who use the same information will store their data in exactly the same way. Now, imagine for a minute that the applications used by an organisation stored the same data randomly in the database, with a complex algorithm to be manually followed in order to locate the information. Imagine if this information concerned the number of customers ordering a piece of kit, or the financial records for the organisation - how long would it take the accounts department to balance the books at the end of the month or year if every record of income and expenditure had to be manually retrieved from a random storage system?

CRAZY????????????

Funnily enough, there are a lot of organisations out there (and you know who you are) that manage their unstructured information in exactly this way! These organisations wouldn't consider themselves mad, but I would question their sanity any day of the week. You see, things are changing rapidly in the world of enterprise information. We've all seen the slides that state how much data is being created daily and how much of it lives within databases as opposed to on file systems. The thing is, it's all true and guess what - the stats show that it's only going to get worse; a LOT worse.

Think about structured data as being an animal's skeleton, and unstructured information as the muscle, skin, organs and flesh. With just a skeleton, you could probably build a structure closely resembling what the animal looked like. With everything else, you bring that animal to life and allow it to live. It's the same with any organisation. Without the management of unstructured data, all you have is a skeleton - the flesh that brings life (or context!) is lost, or simply REALLY, REALLY hard to find.

Organisations are being challenged to better manage their unstructured data. Here in Australia, the Federal Court has released its practice note that pretty much tells organisations that they had better get their act together, else heavy fines and potentially jail time can be imposed on the organisation and its directors. ECM solutions can be really simple to implement and use effectively. You don't need the overkill and complexity that a solution like Documentum or FileNet brings - in fact, the majority of those vendors' customers use only a tiny subset of the overall functionality offered. Oracle's Unified Content and Records Management solution provides a simple-to-use, easily configurable, non-intrusive environment for managing unstructured information. Along with the Universal Online Archive, Image and Process Management and Information Rights Management solutions, an organisation can cover just about any requirement it has for managing what constitutes around 85% (and growing!) of its entire information set.

We are ready to sort out an organisation's enterprise information management craziness with real-world solutions for real-world problems. Consistency comes from using an ECM solution correctly - embedding the business rules around information storage that make the location and retrieval of unstructured data as easy as it is for structured information.

Paul

Tuesday, May 26, 2009

Supercharge Your Applications

What do James Bond, W. O. Bentley and Shawn Fanning have in common?

Well you'll have to head to my presentation called "Supercharge Your Applications" over on slideshare to find out. They all feature in the presentation I gave recently at the InSync09 conference.

Here's a taster... Walter Owen Bentley once famously said, "There is no replacement for displacement." He proved this by taking the 'standard' 3.0 litre Bentley engine and increasing the capacity to 4.5 litres. But then he ran out of money. And just as he ran out of money, the designers who took over his work ran out of space. The engine bay of that car was only so big; they couldn't squeeze any more cylinders, any more capacity, in there. So, in their quest to improve performance, what did they do? They bolted on a supercharger (much to W. O.'s disgust - he hated forced induction) and took the output from 82 kW to 140 kW.

And what does this all mean for you? Well, click here for the presentation to find out how you can bolt some already existing technology on to your applications to improve performance - say hello to Oracle Coherence.

-sean

Failure to implement and adhere to SOA Governance

Today we want to talk about a key reason that SOA does sometimes fail - lack of governance. I have asked Mervin Chiang from Leonardo Consulting to blog this for me. Mervin is one of Australia's leading BPM/Governance experts and has great insight into both the technical and business aspects of SOA/BPM projects. Over to you Mervin...

I have been a Business Process Management (BPM) practitioner for 5 years and in the last 2 have been working with companies to “Automate Business Processes” using Oracle Middleware technologies. You see, I try not to call it SOA, seeing that it has been getting a bad rep recently (see: SOA is dead). I see many similarities in both acronyms’ history…

The Birth – BPM, as a core concept, started very early and was not even called BPM. Someone smart (e.g. see: Scientific Management) realised that people work, and their work can be “chopped” up and studied and strung together... (A.k.a. “workflow”)

The Craze – Then we had Business Process Reengineering (BPR). “Let’s toss the organisation’s core processes in the air and see how it lands!”… “Ah, it landed well! Let’s spend money and lots of time to implement these process changes!” Meanwhile in the IT world the CIO goes… “Let’s toss our IT landscape in the air, screw on all web services in every hole and see how it lands! Then document it…” (A.k.a. “Enterprise” SOA)

The Epiphany – The statement "Oh no! There are too many moving parts. How do we handle them?!" gave rise to BPM (keyword: Management). Much of the literature out there that talks about how SOA has failed, or how we can save it, points to the same realisation from the IT paradigm: SOA needs management… (A.k.a. SOA Governance)

Just as BPM discusses the lifecycle of processes, we see the need to understand the service lifecycle. Also like processes, there are different "layers" of services (granularity). All these layers make up a service portfolio. So, how do we manage all these moving parts? In his article, Mike Kavis talks about design-time and run-time governance requirements. In addition to these two views, I see the need for "management-time" or "continuous-improvement-time" governance.

At one of my recent customers, we’re using Oracle Fusion Middleware tools to achieve such an ecosystem. I call it an “ecosystem” or “platform” as this cannot be successful if only taken from a project level context (See Saul’s article on Viewing SOA as a project instead of an architecture).

Let’s start bottom-up shall we?

Oracle Service Registry (OSR) handles all the run-time knowledge of the services we have in the organisation’s landscape.

Oracle Enterprise Repository (OER) has knowledge at design-time of impact if one service, application system or business process were to change. It also talks to the OSR and has knowledge of usage to report on reusability of services.

Business Process Analysis (BPA) then uses information from the OER to build its catalogue or library of objects to aid during both design-time and continuous-improvement-time. We will use the BPA’s repository in various ways:

· To graphically represent the components of the Enterprise Architecture (EA)

· To launch process improvement, redesign and reengineering projects in alignment with the organisation’s strategic plan

· To do BPM itself

· To execute process automation projects (BPMN to BPEL, direct requirements implementation)

So, if we were to work top-down:

  1. Identify/discover requirements to improve, redesign or reengineer using BPA and start a project. BPA gives you an enterprise or cross-project view of the organisation.
  2. Analyse and (re)design your solution with the aid of BPA and OER. Run simulations in BPA and understand technical impacts with OER.
  3. Build and deploy your solution into OSR. The service catalogue will be synced in OER and BPA for future projects.
  4. Monitor performance to start round 2 of the lifecycle if needed (go back to point 1)

We’ve just talked about the tools that help us in this governance journey. There are also the challenges of people and process. The processes defined to do SOA governance are just as important as the people who will be carrying out these processes. In the customer example I mentioned earlier, we’ve used the publishing capabilities of BPA to communicate this to all roles involved in the business process and service lifecycle as an educational resource. We didn’t name it “governance” of course!

Ultimately, with SOA governance - as with anything, be it implementing software, BPM, building a house, getting married or having an operation - it's the method of implementation itself, and not the technology, that determines success or failure. Just because I bought a shiny new scalpel doesn't mean I am qualified to operate on you and take out your appendix!

Thanks Mervin for your valuable insights into how we should put Governance at the core of SOA/BPM projects. However, I will ensure that I don't end up at your surgery for my next operation! If you are interested in finding out more about best practice in Governance and BPM technology, you should check out Leonardo Consulting's Process Days Seminar in Sydney, August 5-6.


Friday, May 15, 2009

Oracle Fusion Middleware Forum

Making IT Successful When IT Needs To Be!

The Oracle Fusion Middleware team recently held forums in Auckland (6 May), Brisbane (8 May) and Perth (12 May). These sessions were well attended, and delegates enjoyed the mix of listening to senior leaders from Oracle, such as VP of Product Management Ed Zou; interviews with some of our customers; and the chance to participate in panel discussions. The sessions have been recorded as podcasts below for your listening pleasure.

Executive Breakfast Session
Customer Panel Discussion: Business and IT Transformation (45 min)
Michael Plon, Business Systems Manager Oil Search;
Jason Young, Managing Director SMS Technology Services BU, SMS Consulting;
Matt Wright, Product Manager Oracle Fusion Middleware APAC, Oracle Corporation


Fusion Middleware Forum
Oracle Keynote 1: Driving Business Efficiency and Expansion; Patterns for Success (38 min)
Ed Zou, Vice President Product Management, Oracle Fusion Middleware APAC, Oracle Corporation

Oracle Keynote 2: Delivering Efficiency and Expansion from the Ground Up (36 min)
Matt Wright, Product Manager Oracle Fusion Middleware APAC, Oracle Corporation

Customer Case Study Interview: Oil Search Limited (34 min)
Michael Plon, Business Systems Manager Oil Search Limited
In this session, our customer expert shares best practices and the pitfalls to watch out for to successfully implement Middleware solutions.

Best Practices for Your SOA Infrastructure and Projects (33 min)
Cary Dreelan, Technical Director, Groundhog Software

Extending and Integrating Applications (30 min)
Sean Hooper, Principal Consultant, Oracle Fusion Middleware ANZ, Oracle Corporation

Monday, May 11, 2009

Kiss and Tell

It's a kiss-and-tell SOA exposé from The Red Room. The kiss is, of course, 'keep it simple stupid', and I'm going to tell you all about it.

This continues our occasional series here on the Red Room in which Saul, Richard and I discuss 10 mistakes that cause SOA to fail.

One of the things I really like about SOA is the fact that it is deja-vu all over again. We've all been here before. COM/DCOM, CORBA, Object Oriented Programming, VB, Java... you name it. Central to each of these endeavours are some broadly shared concepts around abstraction, independence, reuse and so on. SOA is the next evolution of what we in IT have been trying to do since the Harvard Mark II was introduced to the Harvard Mark I. That is, keep it simple, or as described in the principles of the Agile Manifesto, "Simplicity - the art of maximizing the amount of work not done - is essential".

SOA is the latest, greatest, and indeed simplest way we've come up with to express this. There is a world of underlying complexity beneath simply booking your airline ticket, making a phone call or twittering absent-mindedly from the bus stop. The complicated bits and bytes of 32-bit and 64-bit architectures. Of programming languages and compilers. Of formats and protocols and data structures. SOA allows us to simplify all that. It allows us to present to our business masters a set of capabilities that are interoperable, unbreakable, composable and reusable (thanks ZapThink).

Why do we make life so complicated? And this isn't just the IT industry, although let's face it, we're pretty good at it. SOA gives us the simplest way to present all the complex, hard work we do in IT to the outside world. Yes, there is a whole lot of iceberg under the water that is unseen, but that's how it should be, I think. We should be able to talk about what we do in simple terms, we should be able to make ourselves understood, or we'll all end up like 'Comic Book Guy'.

Homer: Welcome to the Internet, my friend. How can I help you?
Comic Book Guy: I'm interested in upgrading my 28.8 kilobaud Internet connection to a 1.5 megabit fiber optic T1 line. Will you be able to provide an IP router that's compatible with my token ring Ethernet LAN configuration?
Homer (staring blankly): Can I have some money now?

I'm not for a moment suggesting that implementing SOA correctly is simple. It isn't. It is a serious undertaking that needs serious planning and serious people to do it properly. But unless we focus on the simplicity instead of the complexity, we're just going to make it unnecessarily hard for ourselves and our users. It's not rocket surgery after all.

Thursday, May 7, 2009

More Consolidation in the ECM Space

Hi all

As predicted a year ago, more consolidation is happening at the lower end of the ECM marketplace with today's announcement that Opentext will buy Vignette for between US$300m and $310m.

This is interesting for a number of reasons (especially here in the APAC region).

Opentext bought Hummingbird a number of years ago - since then, they seem to have invested very little in integration between the two platforms. Customers have said to me that they are waiting for an announcement about whether a product they invested heavily in buying and implementing will be around in the future, which gives the impression that Opentext remain undecided. Now that they have made their move on Vignette, where does that leave them in the market?

1. They will obviously gain market share simply by the Vignette customer-base. This doesn't make them stronger or give them a better offering - it just means that they have more customers to service.

2. Their product offering will become broader. Again, this isn't a sign of increasing strength - it just means that Opentext now have more capability in the web and portal space (as long as their customers and prospects only need a ring-fenced solution). Vignette, although a player in the web and portal marketplace, don't offer their users much outside of a delivery platform for information managed by THEIR back-end. Customers who wanted to deliver information from other back-ends through the Vignette portal are generally taken down the path of expensive and extensive customisation to achieve their goals - or often down the simpler path of duplicating their information into Vignette, leading them to question which content is actually correct and relevant.

3. This will give the Opentext and Vignette customers more choice. Yes it will - which one of our many document management solutions would you like to buy? Taking document management as an example, there are at least 3 solutions that you can now buy, all of which are different. Which one will Opentext continue to push to the market (or will all 3 be supported long-term)?

Locally in the APAC market, there will be some management challenges to deal with for the consolidated organisation. At a global management level, there will be 3 sets of managers competing for position.

It will certainly be interesting to watch how this all pans out for Opentext and what decisions they make regarding strategy, management and product.

Paul

Monday, May 4, 2009

The Joys of MAA...

Oracle's Maximum Availability Architecture


Greetings All,

This is my first post for the Red Room. My name is David Centellas and I'm a field consultant specialising in the Oracle Database and its options, with Maximum Availability Architecture and data warehousing on the side. I recently had the pleasure of sitting through a presentation given by Alex Gorbachev (MD of Pythian Group) at InSync09 at the Hilton hotel in Sydney. Alex touched on some key points that many clients are asking about today, mainly around Maximum Availability Architecture, Data Guard and Automatic Storage Management.

Here is a snippet in case you missed it:



One of the most interesting points Alex made was the uptake of Extended RAC into 'production' at client sites. He does go on to say it is a fairly advanced configuration; however, it is working, and working well, in production. I thought I would take a bit of time to chat about what to consider when thinking about Extended RAC. Here are some guidelines that need to be taken into account. As usual, priority should be given to the relevant Oracle best practices in the implementation of such architectures and, as Alex mentions in the video, "Identify what you really need for your business or organisation."

Steven Chan had a really good diagram of MAA on his blog:

http://blogs.oracle.com/stevenChan/images/maa-target-architecture.png


All we would have to do is think of Extended RAC as an extension of the "Database Tier". All the rest of the principles remain the same. http://blogs.oracle.com/stevenChan/

As mentioned by Alex: "The hardest part (is) as you separate the datacenters, the latency between the site(s) increase and this is where the challenges are coming from." Network latency is one of the biggest issues when it comes to extended-cluster scenarios; however, there are other rules of thumb that need to be addressed:

i) Extended RAC over distances of 10-20 km is possible without being cost-prohibitive for the project (the cost of a fibre link is another story ;)). That's not to say longer-distance Extended RACs are not possible - there are some customers with Extended RACs over 50 km in distance - however, specific measures and QoS are in place to ensure the configuration is viable (as well as the use of dark fibre technologies for maximum throughput).

ii) Redundancy is key! Make sure there is never a single point of failure - and this includes the remote link! Dual NICs, multiple RAC nodes, disk redundancy, power redundancy, UPS, etc.

iii) Amount of data. Is the amount of changing data too much for the link to keep up with? If we get this one wrong, you will forever be chasing your tail trying to catch up with the primary RAC.

As Alex mentions “Try to be as simple as possible” in architecting such solutions.


Let's not forget:

1 Gbit
= 1 000 000 000 bits /sec
= 125 000 000 Bytes /sec
= 119.209 MegaBytes /sec
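If you'd rather not trust my mental arithmetic, the conversion above is easy to sanity-check with a couple of lines of Python (the 119.209 figure is megabytes in the binary sense, 1 MB = 1024 x 1024 bytes):

```python
# Convert a nominal 1 Gbit/s link rate into bytes/sec and (binary) megabytes/sec.
bits_per_sec = 1_000_000_000                 # 1 Gbit/s
bytes_per_sec = bits_per_sec / 8             # 8 bits per byte
mb_per_sec = bytes_per_sec / (1024 * 1024)   # binary megabytes (MiB)

print(f"{bytes_per_sec:,.0f} bytes/sec")     # 125,000,000 bytes/sec
print(f"{mb_per_sec:.3f} MegaBytes/sec")     # 119.209 MegaBytes/sec
```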

This is the theoretical limit of gigabit; however, real-world scenarios usually show a significant impact of up to 50% on this theoretical limit. Calculate, calculate, calculate. I remember I was at a client site a couple of years back, trying to diagnose a throughput problem that was occurring on their systems. We bet the pizza we had ordered (it was about midnight, in a government department in New Zealand) on the cause of the problem. My money was on a gigabit bottleneck; his was on hard disk latency.

At the end of the day we performed our calculations and figured out it was a bit of a mixture of both, so we split the cost of the pizza. However, this points out the flaw in one of the classic schools of thought - 'there's no way we'll be flooding a gigabit link' - because these days I see it happen most of the time.
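To show the kind of back-of-the-envelope check we ran that night, here's a minimal sketch - the change rate and the 50% derating factor below are illustrative assumptions for the sketch, not measurements from any real system:

```python
# Can the inter-site link keep up with the rate of changing data?
# All input figures below are illustrative assumptions.
LINK_GBITS = 1.0               # nominal link speed in Gbit/s
REAL_WORLD_FACTOR = 0.5        # assume up to 50% of theoretical throughput is lost
CHANGE_RATE_MB_PER_SEC = 40.0  # hypothetical sustained data-change rate

# Usable throughput in binary megabytes per second.
usable_mb_per_sec = LINK_GBITS * 1e9 / 8 / (1024 * 1024) * REAL_WORLD_FACTOR

if CHANGE_RATE_MB_PER_SEC <= usable_mb_per_sec:
    print(f"OK: {CHANGE_RATE_MB_PER_SEC} MB/s fits within ~{usable_mb_per_sec:.1f} MB/s usable")
else:
    print("Link cannot keep up - you'll forever be chasing the primary RAC")
```

Plug in your own measured change rate and derating factor; if the comparison goes the wrong way, no amount of tuning will save you from guideline iii) above.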