Research Blog - Customer Intelligence

I've been thinking about the value people ascribe to information (as per my thesis!) and I'm of the view that, from a value perspective, there are two broad categories:


  • Content

  • Intelligence



Here, content refers to some strange and inexplicable mechanism whereby people appreciate some experience. This can be music, a video, a web page, a newspaper article, a phone conversation, an opera, etc. In the affluent West, where food and shelter are assured, it is the reason we get up in the morning.

We can measure content in a variety of ways - most obviously by duration of experience (time). Generally, the longer the experience, the more we value it. Other measures relate to quality (e.g. if the sound is scratchy, the print is hard to read or the video requires subtitles, then we may value it less). It's hard for me to see a unifying theory for valuing this kind of thing, as it is very subjective. Yet we do it every day: what's a CD worth, a movie ticket, a phone call, etc.? In the information age, we are continually valuing content.

What about entropy (mathematical information) measures? I recall a former housemate of mine - a PhD student in communications engineering and applied maths - joking that the entropy in a Bollywood movie approaches zero, since the plot, characters, dialogue, score, etc. are all completely predictable. Since entropy requires a (parametric) model, what would that be for movies? This is a weird question, and one that I will stay well away from in my research. I suspect that this analysis is properly the domain of the branch of philosophy that deals with aesthetics.
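Still, just to make the housemate's joke concrete: here's a toy sketch in Python of how predictability drives Shannon entropy towards zero. The distributions over "endings" are entirely invented for illustration - nobody has a real parametric model of a movie.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distributions over four possible endings.
art_house = [0.25, 0.25, 0.25, 0.25]   # anything could happen
formulaic = [0.97, 0.01, 0.01, 0.01]   # hero gets the girl, always

print(shannon_entropy(art_house))  # 2.0 bits: maximal uncertainty
print(shannon_entropy(formulaic))  # ~0.24 bits: approaching zero
```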

The other category was intelligence. By this, I don't mean it in a directly cognitive sense; I mean it in a sense that, historically, stemmed from the military. So, "I" as in "CIA", not as in "IQ" or "AI". Hence, "Business Intelligence" is about producing actionable information - that is, information upon which you are required to make a decision and act.

For example, if customer numbers don't reach a certain threshold at a particular moment in time, then the product is exited. This decision rule is the model, and the customer count is the parameter. Often, the decision rule is more valuable than the actual metric. This confirms a long-held piece of wisdom: questions are more valuable than answers.
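To make the model/parameter distinction concrete, here's a minimal sketch in Python (the threshold and names are invented for illustration):

```python
EXIT_THRESHOLD = 10_000  # the parameter of the decision rule (hypothetical)

def product_decision(customer_count: int) -> str:
    """The decision rule is the model; the customer count is merely its input."""
    return "exit" if customer_count < EXIT_THRESHOLD else "continue"

print(product_decision(8_500))   # exit
print(product_decision(12_000))  # continue
```

The rule encodes the question ("is this product viable?"); the count is just one answer fed into it.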

The appropriate measure for intelligence, then, is the extent to which you acted differently. For business intelligence, it is the financial consequences of this action. The idea of entropy (mathematical information) can be applied to measuring the uncertainty in the decision itself. For example, suppose there are two options: take the high road, take the low road. Initially, each is equally likely to be chosen, or acted upon (50%/50%). If some event causes that to shift (20%/80%), then the change in probabilities can be related to the influence of that event on the decision. That change in decision can have a value ascribed to it using regular decision analysis. It seems reasonable to me to ascribe that value to the change of probabilities resulting from that event: the value of the intelligence.
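Here's a toy worked example of that, under the framing above where the decision itself is probabilistic. The payoffs are invented purely for illustration:

```python
import math

def entropy_bits(probs):
    """Uncertainty in the decision itself, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two options with hypothetical payoffs.
payoffs = {"high road": 60.0, "low road": 100.0}

prior     = {"high road": 0.5, "low road": 0.5}   # before the event
posterior = {"high road": 0.2, "low road": 0.8}   # after the event

def expected_payoff(choice_probs):
    return sum(choice_probs[option] * payoffs[option] for option in payoffs)

print(entropy_bits(prior.values()))      # 1.000 bits of decision uncertainty
print(entropy_bits(posterior.values()))  # ~0.722 bits: the event reduced it

# Value ascribed to the event: the shift in expected outcome it caused.
value_of_intelligence = expected_payoff(posterior) - expected_payoff(prior)
print(value_of_intelligence)  # +12.0 under these invented payoffs
```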

I plan to look at options pricing theory (including "real options" analysis). This is a school of thought that links concepts of value, uncertainty (risk) and time with decisions. It is typically applied to investment decisions, specifically in the futures and derivatives markets, but it can be applied to a much wider range of decision-making too.
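As a preview of how those pieces fit together, here is a minimal one-period binomial option valuation in Python. All the numbers are hypothetical; the point is just that uncertainty enters via the up/down outcomes, time via discounting, and value via the choice the option preserves:

```python
# One-period binomial option valuation (risk-neutral pricing).
# All parameters are invented, purely to show the moving parts.
S0, K = 100.0, 105.0     # current value and exercise (strike) price
up, down = 1.25, 0.80    # possible moves over the period (the uncertainty)
r = 0.05                 # risk-free rate over the period (the time element)

# Risk-neutral probability of the up move.
q = ((1 + r) - down) / (up - down)

# Exercise only if it pays: the decision embedded in the option.
payoff_up = max(S0 * up - K, 0.0)
payoff_down = max(S0 * down - K, 0.0)

# Discounted expected payoff = the value of keeping the choice open.
option_value = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
print(round(option_value, 2))  # ~10.58 with these invented parameters
```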

In setting up a "content/intelligence dichotomy", it's interesting to consider the boundary. For example, "news" feels like intelligence, but is it? I am happy to receive headlines on my mobile phone via GSM, but I don't actually do anything differently: news of a celebrity's passing doesn't prompt me to do anything I wouldn't have done anyway. Yet I still value it, so it's content. What about politics? Voting is compulsory (well, turning up is, anyway). What about weather reports? For other cities? Things to keep in mind as I stumble along ...



Last week, I reviewed a paper by a former PhD candidate of Graeme's - Daniel Moody (with Peter Walsh). They work at Simsion Bowles & Associates. The paper was presented at ECIS '99 and is called "Measuring the Value of Information: An Asset Valuation Approach". As the title suggests, it is very much in line with my thesis topic. The thrust of the paper is that organisations should treat information as a type of asset, and that it should be accorded the same accounting principles as regular assets. The paper goes on to highlight the ways that information is different from regular assets, and hence how it should be treated differently.

The paper suggests seven "Laws" about information-as-an-asset, and proposes that (historical) cost accounting should be the basis for determining the value. This was done without reference to information economics. While I disagree with both the approach and the conclusions/recommendations in this paper, I am given great heart to see that there is a dearth of research in this area. I'm confident that this is a thesis-sized problem!

I am also pleased to finally see an IS researcher cite Shannon's fundamental analysis of "information" - even if I disagree with the conclusion. I'm puzzled, though, that the whole Sveiby/KM thing wasn't mentioned at all. (There was a passing mention of "information registers" but that was it.)

In other news, Graeme and I met with our industry sponsor - Bill Nankervis (Telstra/Retail/Technology Services/..?.../Information Management). I met with Bill a couple of times before while I was still a Telstra employee, but this was our first meeting as researcher/sponsor. We discussed some of Telstra's current goals and issues with regard to information value and data quality, and I'm confident that there is a strong alignment between my work experience, my thesis and Bill's objectives and approach.



Had the regular weekly supervision session with Graeme. Today we discussed the relationship between theories and frameworks, especially in light of Weber's 1st chapter and Dr. Hitchman's seminar (below). Mostly we looked at Graeme's paper on "The Impact of Data Quality Tagging on Decision Outcomes". The main feedback I had was the idea that people will use pre-existing knowledge about the decision task to infer data quality when they aren't presented with any explicitly. In the terms of Graeme's semiotic framework, the social level "leaks" into the semantic level. One approach - potentially already underway - to control for this is to use completely contrived decision tasks totally unfamiliar to the subjects.

Also, I'm curious about how the tagging (quality metadata) of accuracy relates to "traditional" measures of uncertainty such as variance and entropy. Lastly, it seems that this research is heading towards exploring the relationships between data quality and decision quality. That is, consensus, time taken, confidence, etc. seem to be attributes of a decision, and teasing out the significance of data quality constructs on these outcomes would be a whole field of research in itself.

The other topic we discussed was the idea for a joint paper on Service Level Agreements for outsourced customer information. This would be an application of Graeme's framework to the question of how to construct, negotiate and implement an SLA for the provision of customer information services. I think this is quite topical because, while CRM is taking off, organisations are shying away from the underlying data warehousing infrastructure. The paper would draw on ideas of information-as-a-service, service quality theories and my own experiences as a practitioner. The motivation would be to show that data quality issues are a business problem, and can't be contained solely within the IT department. While it's not the main thrust of my thesis, it would be a nice introduction to the "trade" aspects of the research process (ethics, reviews, peer assessment, publication etc).

Lastly, there was a stack of actions for Graeme, involving chasing up information from various people (industry co-sponsor and former PhD student). I've borrowed two books: "Leveraging the New Infrastructure" (Weill and Broadbent) and "Quality Information and Knowledge" (Huang, Lee and Wang).



This morning we had a seminar from Dr. Stephen Hitchman on "Data Muddelling". In essence, he was saying that the IS academy has lost its way and is failing practitioners in this subject area. That is, the program of seeking a sound basis for data modelling in various philosophies is a waste of tax-payers' resources and that, if anything, we should be looking at the work of Edward de Bono.

I'm not sure that I accept that my role as an IS researcher is to ensure that everything I do is of immediate relevance to practitioners. Academic research is risky, and involves longer time scales. Low-risk, quick-delivery research can be funded directly by its beneficiaries, and there are a number of organisations who will take this on. This is part of the "division of labour" of IS research.

That said, Stephen's provocative stance has failed to dissuade me from finishing the introduction to Ron Weber's monograph on "The Ontological Foundations of Information Systems".



Last Friday, there was a seminar on "decision intelligence". I was keen to go, but unexpected family business whisked me away. After reading the abstract (below), I think that while it may have been of general interest, it probably wasn't related to my research domain. It would, however, be of extreme interest and relevance to people working in large, complex and dynamic organisations, who are required to lobby somewhat-fickle decision-makers.


Predicting people's policymaking styles


Dr Ray Wyatt


School of Anthropology, Geography and Environmental Studies, University of Melbourne


ABSTRACT:

Rather than "decision support", the focus is on "decision intelligence" for policymaking. This involves anticipating what policies different kinds of people are likely to favor. Such anticipation enables us to guess how much any proposed policy is likely to be accepted within the community - a consideration that can be just as vital for its ultimate success as any amount of logical, empirical or analytical "support". Therefore, this presentation begins by looking at the planning literature and at the decision-making literature for clues as to how to anticipate people's policy choices. But on finding very few, a radically different approach is outlined. It uses the speaker's own self-improving, advice-giving software which collects enough knowledge, about its past users' decision-making styles, to identify what policymaking criteria different sorts of people tend to emphasize. Such people-specific emphases will be outlined. They should help all professionals, everywhere, to foreshadow the community acceptance of any policy within any problem domain.



This is the website of one Karl-Erik Sveiby: http://www.sveiby.com.au/. He appears to be a leading researcher and practitioner - even a pioneer - of the field of knowledge management. He has some interesting ideas on valuing intangible assets, and some very sensible things to say about organisational performance metrics. While his Intangible Asset Monitor is similar to ideas encapsulated in the Balanced Scorecard methodology, he is at pains to point out the differences.

I wish I'd caught his seminar in my department last semester, but the ".au" suggests he might be back.


Uh oh - a week's gone by without any blog postings. Hardly the point. Okay, a quick review then. I've been having regular weekly meetings with my supervisor, Graeme Shanks. So far, the discussion has been around two topics: 1) the nature of research in the IS discipline and 2) Graeme's research in data quality. On the former, I've been reading papers on IS research approaches (experiments, case studies, action research, conceptual studies etc) and stages (theory building, theory testing, and the difference between scholarship and research).

On the latter, I've been getting across Graeme's approach, based on semiotic theory - the use of signs and symbols to convey knowledge. There may be collaboration opportunities to apply this framework to some of my professional work in defining and negotiating Service Level Agreements with Application Service Providers, who primarily provide data and reports. While this isn't the thrust of my research, it might prove to be an interesting and useful (ie publishable) area.

The main thrust, though, is the value of information. This is no doubt related to the quality of data - probably through the notion of quality as "fitness for purpose". To that end, this week I'm looking into a text on the "Ontological Foundations of Information Systems" (Weber) and reviewing another of Graeme's papers on the role of quality in decision outcomes. I will also begin in earnest to look into information economics. I've attended some lectures on Game Theory, which, along with Decision Theory, will probably be a formalist way in.

I'm mindful of the relevance vs rigour aspects of this, though, as I expect that models of how entities make decisions bear little resemblance to what people actually do in organisations. I think, generally, the benefits of a model lie in what is left out as much as anything.

