Observing the Economic Value of Information (EVI)
Key Question: What is the net financial contribution from this data?
Level 1 dimensions
- Revenue uplift
- Cost savings / avoidance
- Risk reduction / capital efficiency
- Net value after costs and risk
Level 2 indicators and observables
- Revenue uplift
  - Incremental revenue attributable to data‑driven decisions (e.g. price optimisation, cross‑sell).
  - New products or services that depend on this data.
  - Increased customer lifetime value where data plays a proven role.
- Cost savings / avoidance
  - Reduced operational costs (automation, fewer errors).
  - Lower loss ratios / fraud / claims leakage.
  - Avoided capex/opex by using data instead of physical experiments.
- Risk reduction / capital efficiency
  - Reduced regulatory fines / incidents.
  - Lower capital charges due to better risk models.
  - Improved resilience (e.g. fewer outages, better contingency planning).
- Net value
  - EVIgross = revenue uplift + cost savings + risk reduction (in $).
  - EVInet = EVIgross − CVI (all relevant data costs, including governance spend).
  - NPV of EVInet over the expected data lifecycle.
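The net-value formulas above can be sketched as a small calculation. This is a minimal sketch, assuming illustrative dollar figures, a constant annual EVInet, and an 8% discount rate; none of these numbers come from the framework itself.

```python
# Sketch of the EVI net-value formulas; all figures are illustrative.

def evi_gross(revenue_uplift: float, cost_savings: float, risk_reduction: float) -> float:
    """EVIgross = revenue uplift + cost savings + risk reduction (in $)."""
    return revenue_uplift + cost_savings + risk_reduction

def evi_net(gross: float, cvi: float) -> float:
    """EVInet = EVIgross - CVI (all relevant data costs, incl. governance spend)."""
    return gross - cvi

def npv(annual_evi_net: float, discount_rate: float, lifecycle_years: int) -> float:
    """NPV of a constant annual EVInet over the expected data lifecycle."""
    return sum(annual_evi_net / (1 + discount_rate) ** t
               for t in range(1, lifecycle_years + 1))

gross = evi_gross(revenue_uplift=400_000, cost_savings=250_000, risk_reduction=100_000)
net = evi_net(gross, cvi=300_000)   # 750_000 - 300_000 = 450_000
value = npv(net, discount_rate=0.08, lifecycle_years=5)
```

In practice the annual EVInet would rarely be constant; a real model would discount a year-by-year cash-flow series instead.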
How to use
- EVI should be tied to your finance systems: map uplift and savings to P&L lines where possible.
- Rank assets by EVInet and EVI/CVI ratio to drive capital allocation.
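The ranking step above can be sketched as follows; the asset names and dollar figures are invented for illustration.

```python
# Illustrative ranking of data assets by EVInet and by EVI/CVI ratio.

assets = [
    {"name": "customer_360",   "evi_gross": 900_000, "cvi": 400_000},
    {"name": "clickstream",    "evi_gross": 300_000, "cvi": 250_000},
    {"name": "claims_history", "evi_gross": 600_000, "cvi": 150_000},
]

for a in assets:
    a["evi_net"] = a["evi_gross"] - a["cvi"]        # absolute value created
    a["evi_cvi_ratio"] = a["evi_gross"] / a["cvi"]  # capital-efficiency view

# Two complementary rankings: net value for sizing, ratio for efficiency.
by_net = sorted(assets, key=lambda a: a["evi_net"], reverse=True)
by_ratio = sorted(assets, key=lambda a: a["evi_cvi_ratio"], reverse=True)
```

Note the two rankings can disagree: a large asset may create the most net value while a small, cheap one is the most capital-efficient.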
Turning this into an assessment hierarchy
To make this usable in the wild:
- Define a 1–10 scale for each dimension
  - For each dimension (e.g. IVI: Validity), specify what scores of 1, 5, and 10 look like, with concrete numbers or examples.
- Create a one‑page rubric per metric
  - Rows = dimensions.
  - Columns = scores 1–10 with short descriptors (e.g. <10% valid, ~50% valid, >70% valid).
  - Analysts pick the best‑fit score based on the observables above.
- Weight dimensions
  - Not all dimensions matter equally; weight them to reflect business priorities.
- Calibrate with exemplars
  - Pick 3–5 known datasets (e.g. one exemplar, one average, one problem case).
  - Score them independently with the rubric; refine thresholds until scores align with expert intuition.
- Document how to score someone else's case
  - For external assessments (e.g. vendor data, acquisition due diligence), specify:
    - What evidence you expect (sample data profiles, contracts, case studies).
    - Which dimensions you can score directly vs which need proxies (e.g. you may see adoption, but not internal EVI).
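The rubric and calibration steps above can be sketched in code. This is only a sketch: the dimension names, descriptors, anchor scores, and tolerance below are all illustrative assumptions, not part of the framework.

```python
# Sketch of a machine-readable rubric plus a calibration check.
# Dimension names, descriptors, and all scores are illustrative.

rubric = {
    "validity": {1: "<10% of records valid", 5: "~50% valid", 10: ">70% valid"},
    "coverage": {1: "key entities missing", 5: "partial coverage", 10: "near-complete"},
}

def best_fit(dimension: str, analyst_score: int) -> str:
    """Return the descriptor of the rubric anchor nearest the analyst's score."""
    anchors = rubric[dimension]
    return anchors[min(anchors, key=lambda s: abs(s - analyst_score))]

# Calibration: score known exemplar datasets and compare with expert intuition.
rubric_scores = {"exemplar": 9, "average": 5, "problem_case": 2}
expert_scores = {"exemplar": 8, "average": 5, "problem_case": 1}
misaligned = [name for name in rubric_scores
              if abs(rubric_scores[name] - expert_scores[name]) > 1]
# An empty list suggests the thresholds align with expert judgement;
# otherwise, revisit the flagged thresholds and re-score.
```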
Worked example: weighting and scoring EVI
Assume EVI is the weighted sum of four dimensions:
- Revenue uplift – 30%
- Cost savings / avoidance – 30%
- Risk / capital impact – 20%
- Net value after costs – 20%
EVI scoring rubric (1–5)
1. Revenue uplift (weight 30%)
Question: How much incremental revenue is attributable to using this data?
2. Cost savings / avoidance (weight 30%)
Question: How much cost does this data save or help avoid?
3. Risk reduction / capital efficiency (weight 20%)
Question: How does this data improve risk management, capital use, or regulatory outcomes?
4. Net value after costs (weight 20%)
Question: What is the net economic contribution once we include CVI (all relevant data costs)?
5. Computing the composite EVI score
- For each dataset/use case, assign 1–5 for each dimension based on the descriptors.
- Compute a weighted average:
EVI = 0.30·Ru + 0.30·C + 0.20·Rk + 0.20·N
Where:
- Ru = revenue uplift score
- C = cost savings / avoidance score
- Rk = risk / capital impact score
- N = net value after costs score
We can then rank assets by EVI and by EVI/CVI to guide capital allocation.
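The composite formula above can be sketched directly; the dimension scores below are illustrative 1–5 ratings, not real assessments.

```python
# Composite EVI per the weighted formula above; scores are illustrative.

WEIGHTS = {"revenue_uplift": 0.30, "cost_savings": 0.30,
           "risk_capital": 0.20, "net_value": 0.20}

def composite_evi(scores: dict) -> float:
    """EVI = 0.30*Ru + 0.30*C + 0.20*Rk + 0.20*N."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

scores = {"revenue_uplift": 4, "cost_savings": 3, "risk_capital": 5, "net_value": 2}
evi = composite_evi(scores)   # 1.2 + 0.9 + 1.0 + 0.4 = 3.5
```

The same function applied per dataset, divided by a comparable CVI score, yields the EVI/CVI ranking used for capital allocation.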