According to Wikipedia, Just-In-Time (JIT) is a production strategy that strives to improve a business’ return on investment by reducing in-process inventory and associated carrying costs.
The JIT philosophy aims to align the different operators in the supply chain, thereby avoiding delays at critical handover moments. These delays are in turn responsible for increasing stock volumes. By applying JIT, stock levels tend to go down, service levels go up, and time-to-market improves.
So what about Business Intelligence?
In a Business Intelligence process, you will find many supply-chain-like elements: groups of people needing to hand over to other groups. Business people dream up a business case, an analyst creates a business evaluation, the internal BI team translates specs, the finance team evaluates budget and ROI, the vendor management team handles the RFP, the management team signs off on the project, and so forth. During a project, development teams hand over to testers, and key users need to involve the business user for user acceptance. This cycle runs many times, and the interactions tend to be complex rather than linear.
Before you get results back to the business, there are quite a few areas in which you will find stockpiling and handover delays. These issues are in many ways similar to the supply chain example.
So how does this impact time to market, process cost, and delivery?
What about return on assets and return on investment?
Akin to Just-In-Time, the domain of Business Intelligence offers several methods to improve throughput and avoid unnecessary delays:
- Use an iterative (Agile/Scrum) delivery method to shorten the feedback loop
- Automate code generation instead of relying on hand-coding
- Use tools and workflows capable of handing work over between teams
- Avoid double work; ideally you want a from-whiteboard-to-development-and-back-to-business process flow
- Use BICC principles, making sure that all assets involved (inside or outside) act as part of one team
As with Just-In-Time, you want to measure the flow and define the critical moments. As in the supply chain world, you want to make sure the process itself improves as you go along.
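As a minimal sketch of what measuring that flow could look like, assuming you log a timestamp at every handover, something like the following could surface where work queues up (the stage names and event structure here are illustrative assumptions, not a prescribed format):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical handover log for BI work items: (item, stage, timestamp).
# In practice these events would come from your ticketing or workflow tool.
events = [
    ("req-1", "business_case",   datetime(2024, 1, 2)),
    ("req-1", "analysis",        datetime(2024, 1, 9)),
    ("req-1", "development",     datetime(2024, 1, 12)),
    ("req-1", "user_acceptance", datetime(2024, 2, 1)),
    ("req-1", "delivered",       datetime(2024, 2, 3)),
]

# Group the events per work item.
by_item = defaultdict(list)
for item, stage, ts in events:
    by_item[item].append((ts, stage))

# Measure how long each item waited between consecutive handovers.
waits = defaultdict(list)
for steps in by_item.values():
    steps.sort()
    for (t0, s0), (t1, s1) in zip(steps, steps[1:]):
        waits[f"{s0} -> {s1}"].append((t1 - t0).days)

# The handovers with the longest average wait are your critical moments.
for handover, days in sorted(waits.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{handover}: avg {sum(days) / len(days):.1f} days")
```

The handover with the longest average wait is your first candidate for improvement, just as the slowest station in a factory sets the pace of the whole line.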
Data Warehouse Automation (DWA) seems to be a good technique here, as it covers all of the above.
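To make the code generation idea concrete, below is a minimal, hypothetical sketch of the kind of metadata-driven generation that DWA tools perform at much larger scale. The table metadata, naming conventions, and audit columns are illustrative assumptions, not any specific vendor’s approach:

```python
# Hypothetical source-table metadata, as a DWA tool might capture it from a model.
TABLES = {
    "customer": {
        "customer_id": "INTEGER",
        "name": "VARCHAR(100)",
        "country": "VARCHAR(2)",
    },
}

def generate_staging_ddl(table: str, columns: dict) -> str:
    """Generate staging-table DDL from metadata instead of hand-coding it."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    # Audit columns are stamped on uniformly; consistency is a key win of generation.
    audit = "load_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n  source_system VARCHAR(50)"
    return f"CREATE TABLE stg_{table} (\n  {cols},\n  {audit}\n);"

for table, columns in TABLES.items():
    print(generate_staging_ddl(table, columns))
```

Because the DDL is derived from metadata, regenerating it after a model change is cheap, which removes exactly the kind of hand-coding delay that JIT thinking targets.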
By shifting markers across the chain you may further improve time to market and service levels. Today it is still common for business users to get involved only at the beginning and at the end of a flow.
Why do we need to do this?
Early prototyping techniques may allow you to secure user adoption well before the development team gets into action, which obviously saves a lot of time and confusion. A good prototyping procedure may remove the need for late-in-chain user validation, and may indeed allow you to avoid costly rework altogether.
However, some business users do not like abstract methods – they want prototypes to carry real, relevant data. That is why Data Discovery can be so popular at this stage, and why Proof of Concept exercises tend to focus on it.
But it can create frustration thereafter. First, because the prototype will invariably show errors, likely due to data quality issues, which causes early-stage resistance. Far worse, it sets over-optimistic expectations on time to market.
Think about it: if it only takes a couple of days to show a prototype based on a relevant data sample, why would users then feel comfortable waiting months to work with actual results?
Concepts such as Governed Data Discovery ensure that the prototype exercise is done on real data: before, during, and sometimes even after the development cycle.
In conclusion: Just-In-Time and the desire to continuously improve your supply chain have given birth to techniques such as lean manufacturing and other process improvement methods. These have already been adopted in software development, and to some extent in BI development teams, but in my view the approach could benefit the whole of the Business Intelligence supply chain.
Good to read this article. I also look forward to reading the content on Data Warehouse Automation in this blog. Some questions…
* Where is ‘Governed Data Discovery’ defined?
* Do you have any specific thinking on the Lean principle of ‘optimizing the whole’ with regards to Just-In-Time Business Intelligence? Sure, Agile has its strengths, but in my view, too often in Agile Business Intelligence implementations the whole (architecture, design, standardization, and quality) takes a back seat; there is little conscious consideration of ‘the whole’, let alone optimization of it. The data model, as well as the data extraction, transformation and loading (ETL), is typically either rushed or taken out of Sprints entirely, since both, when performed as a big effort up front, take too long for a single Sprint. Thoughts?