Observing the Business Value of Information

The Stranded Oracle (High Relevance, Severe Latency, Low Integration, Low Adoption). Image by the author, using Gemini.

This is the second of six posts in which I identify how to assess the business value of information (BVI, from the Infonomics schema) from what can be observed within the organisation. As raised in a comment, this set of observables is just meant to get us off the ground internally. Ultimately, we'd want to be able to compare across organisations, but we're not there yet. So, this is for analysts who want to start building an understanding of what's on hand, not to conduct an exhaustive analysis.

Key question: How well is this data wired into important business decisions?

Level 1 dimensions

  1. Relevance to priority outcomes
  2. Decision / process integration
  3. Coverage of the business problem
  4. Timeliness / Latency
  5. User adoption

Level 2 indicators and observables

  • Relevance
    • Number of strategic objectives / OKRs this dataset directly supports.
    • Number of P&L lines or risk metrics that depend on it.
    • Qualitative criticality rating from business owners (e.g. survey 1–5).
  • Decision / process integration
    • Count of live business processes that consume this data (e.g. pricing, underwriting).
    • Extent to which it is embedded in operational systems (decisioning engines, workflows) versus used only in exploratory analysis.
    • Presence in regulatory / management reporting.
  • Coverage of the business problem
    • % of the relevant population represented (e.g. customers in target segment).
    • Number of required entities/attributes present for the decision logic.
    • Known blind spots (markets, segments, products not covered).
  • Timeliness / Latency
    • End‑to‑end latency from real‑world events to data availability (minutes/hours/days).
    • Frequency of refresh relative to decision cadence (e.g. daily vs monthly).
    • % of decisions made after the latest data arrives.
  • User adoption
    • Number of active users / teams relying on the dataset.
    • Frequency of queries / API calls / report views.
    • Inclusion in key dashboards used by leadership.
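One way to start collecting these observables per dataset is a simple record. A sketch, where the field names are illustrative simplifications of the indicators above (one representative observable per Level 1 dimension, plus blind spots):

```python
from dataclasses import dataclass, field

@dataclass
class BviObservables:
    """Raw observables for one dataset, mirroring the five Level 1 dimensions."""
    dataset: str
    okrs_supported: int = 0            # Relevance to priority outcomes
    consuming_processes: int = 0       # Decision / process integration
    population_coverage: float = 0.0   # Coverage (fraction of relevant population, 0-1)
    refresh_lag_hours: float = 0.0     # Timeliness / latency
    active_users: int = 0              # User adoption
    blind_spots: list[str] = field(default_factory=list)

# Hypothetical example record for a churn dataset.
obs = BviObservables(
    dataset="customer_churn_events",
    okrs_supported=2,
    consuming_processes=3,
    population_coverage=0.88,
    refresh_lag_hours=24,
    active_users=41,
    blind_spots=["SME segment"],
)
print(obs.dataset, obs.population_coverage)
```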

How to use

  • Score each dimension 1–10 based on observable thresholds (the rubric below gives descriptors for each band).
  • A high BVI implies a clear line of sight to money, risk or regulation; strong embedding in live processes; and data that arrives on time.

BVI scoring rubric (1–10)

Below is a concrete 1–10 scoring rubric for BVI (Business Value of Information) to drop into a paper or use as a working template. Tweak the weights to fit your context. Assume BVI is the weighted sum of five dimensions:

  • Relevance – 25%
  • Decision / Process Integration – 25%
  • Coverage of Business Problem – 20%
  • Timeliness / Latency – 20%
  • User Adoption – 10%

1. Relevance to priority outcomes (weight 25%)

Question: How tightly is this dataset linked to top business goals (revenue, cost, risk, regulatory)?

  • 1–2 Irrelevant: No clear link to any current strategic objectives or KPIs. Used only for ad‑hoc curiosity analysis or legacy reports.
  • 3–4 Peripheral: Supports minor reports or niche activities; not linked to core P&L lines or risk metrics. Could be removed with little business impact.
  • 5–6 Supporting: Clearly supports at least one important KPI or regulatory requirement, but not mission‑critical. Other data could substitute with moderate loss of fidelity.
  • 7–8 Important: Directly supports 1–2 top‑level objectives (e.g. revenue growth, churn reduction, capital efficiency). Loss of this data would be felt quickly in business performance.
  • 9–10 Critical: Essential to multiple priority outcomes and board‑level metrics. Without it, major processes (pricing, underwriting, core risk controls) would fail or degrade severely.

2. Decision / process integration (weight 25%)

Question: Where and how is the data actually used?

  • 1–2 Not integrated: Only used in isolated spreadsheets or exploratory analytics. Not referenced in any standard reports, dashboards or production systems.
  • 3–4 Reporting only: Appears in a few reports/dashboards, typically for descriptive analytics, but not wired into any operational decision engines or workflows.
  • 5–6 Limited operational use: Used in at least one operational process (e.g. a model that influences offers, routing or prioritisation), but the impact is local or small‑scale.
  • 7–8 Broad operational use: Embedded in multiple workflows or decision engines across the organisation. Several teams rely on it daily for operational decisions.
  • 9–10 Deeply embedded core: Hard‑wired into core systems (pricing engines, credit decisions, fraud blocks, regulatory reporting). The system cannot run as designed without this data.

3. Coverage of the business problem (weight 20%)

Question: How fully does this dataset represent the domain it is supposed to cover?

  • 1–2 Very partial: Covers <30% of the relevant population or events. Major customer segments/products/markets missing; frequent blind spots.
  • 3–4 Partial: Covers 30–60% of the relevant population. Important gaps are known; needs substantial supplementation from other sources.
  • 5–6 Adequate: Covers 60–80% of the relevant population. Some gaps remain, but they affect secondary segments or can be worked around.
  • 7–8 High: Covers 80–95% of the relevant population. Only small or low‑value segments are missing. Fit‑for‑purpose for most use cases.
  • 9–10 Near complete: >95% coverage of the target domain with minimal blind spots. Considered the de facto system of record for the business problem.
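The coverage thresholds above translate directly into code. A minimal sketch (the function name and the choice of returning each band's lower bound are illustrative assumptions, not part of the rubric):

```python
def coverage_score(fraction_covered: float) -> int:
    """Map the fraction of the relevant population covered to a rubric band.

    Thresholds follow the coverage rubric:
    <30% -> 1-2, 30-60% -> 3-4, 60-80% -> 5-6, 80-95% -> 7-8, >95% -> 9-10.
    Returns the lower bound of the matching band.
    """
    if not 0.0 <= fraction_covered <= 1.0:
        raise ValueError("fraction_covered must be between 0 and 1")
    if fraction_covered < 0.30:
        return 1   # Very partial
    if fraction_covered < 0.60:
        return 3   # Partial
    if fraction_covered < 0.80:
        return 5   # Adequate
    if fraction_covered <= 0.95:
        return 7   # High
    return 9       # Near complete

print(coverage_score(0.85))  # 7
```

An analyst could refine the return value within each band (e.g. 7 vs 8) using the qualitative notes, such as whether the missing segments are low‑value.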

4. Timeliness / latency (weight 20%)

Question: Does the data arrive in time to influence the decisions it should support?

  • 1–2 Stale: Data is updated so infrequently (e.g. quarterly/annually) that it cannot support most relevant decisions; used only for long‑cycle reporting.
  • 3–4 Often late: Updates lag decision cycles by more than one full cycle (e.g. weekly data for daily decisions). Frequently too late to act on.
  • 5–6 Generally adequate: Data is refreshed on the same cadence as the main decision cycle (e.g. daily for daily decisions), but with occasional delays or latency issues.
  • 7–8 Timely: Data is refreshed slightly ahead of or in sync with decision needs (e.g. hourly for daily decisions, daily for weekly), with rare delays.
  • 9–10 Real‑time / just‑in‑time: Near real‑time or streaming; latency is well below the decision window (e.g. seconds/minutes). Designed explicitly to support time‑sensitive decisions.
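The timeliness bands are all ratios of refresh interval to decision cadence, so they lend themselves to a small helper. The ratio cut‑offs below are my assumptions, chosen to reproduce the rubric's examples (weekly for daily = often late, daily for daily = adequate, hourly for daily = timely):

```python
from datetime import timedelta

def timeliness_score(refresh: timedelta, decision_cycle: timedelta) -> int:
    """Band refresh latency against the decision cadence.

    ratio = refresh interval / decision cycle; lower is better.
    Returns the lower bound of the matching rubric band.
    """
    ratio = refresh / decision_cycle   # timedelta division yields a float
    if ratio > 10:
        return 1   # Stale: far slower than the decisions it should support
    if ratio > 1:
        return 3   # Often late: lags by more than one full cycle
    if ratio > 0.5:
        return 5   # Generally adequate: same cadence as the decision cycle
    if ratio > 0.01:
        return 7   # Timely: refreshed ahead of decision needs
    return 9       # Real-time / just-in-time: well below the decision window

print(timeliness_score(timedelta(hours=1), timedelta(days=1)))  # 7
```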

5. User adoption (weight 10%)

Question: Who actually uses this data, and how often?

  • 1–2 Unused: Very few or no regular users. Access logs show minimal queries/reads. No named business owner.
  • 3–4 Low adoption: Used by a single team or a few analysts; mostly pull‑only, ad‑hoc analysis. Limited awareness outside that group.
  • 5–6 Moderate adoption: Used by several teams. Appears in some regularly consumed reports/dashboards. A recognised owner exists.
  • 7–8 High adoption: Used by many teams across functions. Appears in widely used dashboards and self‑service tools. Frequent queries / API calls.
  • 9–10 Ubiquitous: Organisation‑wide dependence. Featured in executive dashboards, frontline tools, and multiple domains. Consumption is high and growing.

How to compute the composite BVI score

  1. For a given dataset/use case, have an analyst (ideally with a business counterpart) assign a 1–10 score for each dimension based on the descriptors above.
  2. Compute a weighted average:

BVI_composite = 0.25·R + 0.25·I + 0.20·C + 0.20·T + 0.10·U

Where:

  • 𝑅 = Relevance score
  • 𝐼 = Integration score
  • 𝐶 = Coverage score
  • 𝑇 = Timeliness score
  • 𝑈 = User adoption score
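The weighted average above can be sketched in a few lines (weights are the ones stated in this post; the function name and validation are my additions):

```python
# Dimension weights from the rubric; they sum to 1.0.
WEIGHTS = {
    "relevance": 0.25,
    "integration": 0.25,
    "coverage": 0.20,
    "timeliness": 0.20,
    "adoption": 0.10,
}

def bvi_composite(scores: dict[str, float]) -> float:
    """Weighted average of the five dimension scores (each on the 1-10 scale)."""
    if set(scores) != set(WEIGHTS):
        raise ValueError(f"expected scores for {sorted(WEIGHTS)}")
    for dim, s in scores.items():
        if not 1 <= s <= 10:
            raise ValueError(f"{dim} score {s} is outside 1-10")
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

# Example: the Stranded Oracle pattern from the illustration above -
# highly relevant but late-arriving and lightly adopted.
print(bvi_composite({
    "relevance": 9, "integration": 8, "coverage": 7,
    "timeliness": 2, "adoption": 3,
}))  # 6.35
```

Because the weights sum to 1, the composite stays on the same 1–10 scale as the individual dimensions, which makes scores comparable across datasets.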

Next up: the observables for the Performance Value of Information.