Research Blog - Customer Intelligence

OK - here's a hypothesis, no, more of an analogy: Options (calls and puts) are second order transactions. They're transactions about transactions, and they involve a shift in the time dimension and a capping or limiting in the value dimension. Similarly, we can have decisions about decisions; we can decide today that "I will make a decision 6 months hence" or "No matter what, Phil won't be deciding the issue". These are second order, or metadecisions. We can also make contingent decisions: "I will review your salary in 6 months. If revenue hasn't increased, you will not be getting an increase." No doubt a large chunk of what we mean by "manage" could be described as decisions about decisions about ... ad infinitum.

From an information-theoretic point of view, what is going on here? Well, to some extent we're creating options, and to another extent we're eliminating options. For the salary-review example, the manager has decided to remove the option of "increase" (implicitly leaving only "stay the same" or "decrease") contingent on some variable. Perhaps an approach is to enumerate all possible decision outcomes, and assign to each a probability of being selected (from the point of view of the manager). Eg. "increase", "constant" and "decrease" are all equally likely. Hence, we can look at the entropy of the decision space, D:

H(D) = E[-log P(D)]

Obviously, the selection "increase" hinges on a random variable, R, that relates to revenue and the decision rule. By comparing the entropy before and after certain events, we are measuring the change in decision-selection entropy NOT as a measure of information - but of intelligence (a small sketch follows the list below). The events that lead to a change in entropy (or propensity to decide a certain way) would fall into three types:

1) Change in option structure (eg. merging, eliminating, creating): "I've been told I can't give you a decrease, regardless of revenue".
2) Change in decision rules (eg. contingency): "If revenue hasn't increased by 10%, you won't be getting an increase".
3) Change in parameters (eg. variable uncertainty): "Revenue will remain constant with 95% certainty".
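To make this slightly more concrete, here's a minimal sketch (in Python) of the entropy calculation for the salary-review example. The propensities are made up purely for illustration:

    import math

    def entropy(probs):
        """Shannon entropy (in bits) of a distribution over decision outcomes."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Before: "increase", "constant" and "decrease" are all equally likely
    before = {"increase": 1/3, "constant": 1/3, "decrease": 1/3}

    # After the contingent rule (and a belief that revenue growth is unlikely),
    # the manager's propensities might shift to something like:
    after = {"increase": 0.1, "constant": 0.6, "decrease": 0.3}

    print(entropy(before.values()))  # ~1.58 bits
    print(entropy(after.values()))   # ~1.30 bits
    # The drop (~0.29 bits) is the change in decision-selection entropy
    # attributable to the event.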

Generally speaking, people like having options and will pay money to keep their options open. However, markets like people to relinquish options, so that they can operate more efficiently through planning and risk-sharing. For example, renting (should be) dearer than taking out a mortgage. Or if you promise to give Target all your business, you should get a modest discount. Basically, you help out the market, and the market kicks some back your way.

If options are valuable (and freely traded in secondary markets), why, then, would managers eliminate them? (Partition their decision space.) Why would they knowingly, in advance, reduce the courses of action available to them? First guess: they rarely do it. Most managers I've dealt with are extremely reluctant to do this, and don't want to see targets or similar on their product reports. No one wants their business case coming back to bite them on the bum.

Second guess: it's a communication thing. Specifically, it's a negotiation thing. The motivation for telling your staff about the salary review, and the fact that it's tied to revenue, is an incentive technique. The manager thinks that her staff will work better (ie increase revenues) knowing this: they will act differently, ie make different decisions. The existence of this decision rule in the manager's head is a decision variable in the heads of the staff. Thus, it falls into the domain of "threats and promises".

Third guess: it's a communication/negotiation thing between the manager and her boss/company. "See, I'm managing my staff for performance - please give me my bonus".

Where does this leave us? Perhaps a measurable and testable (ie normative/positivist) theory of decision-making could provide us with a basis for arguing what the effects of decision rules and parameters are. By linking these effects to money via Utility Theory, we could subsume the question of "what resources should I expend on changing my decision selection propensities?" into general Utility Theory (microeconomics and game theory). This, then, might be of help to people in managing their information and improving the quality of their decisions, and hence increase social welfare.



Wow - this time a month's delay. That's a new record!

Papers: I'm reading a set of papers by John Mingers, a leading thinker on the fundamental questions underpinning systems theories, including information systems. Particularly, I'm reading about information and meaning, and how autopoiesis can help explain this. This is definitely not for the faint-hearted, and in fact, reading this and related material makes me think that to really participate in this dialogue you'd need to have spent some time in universities - preferably Californian - during the late 60s, if you know what I mean. The most understandable (to me) idea I've picked up so far is that "information is the propositional content of a sign". This is related to my concept of information ("information is the change in uncertainty of a proposition"), but in a way that's not entirely clear to me.

I'm also reading selected papers from ECIS 2000, particularly those dealing with economic analyses of information, such as operation of markets, and those dealing with customer operations, such as data mining.

Seminars: Last Tuesday I attended an industry seminar on creating value from Clickstream Analytics. It was a bit disappointing: in a nutshell, SAS has put out a web log analyser, and the National Museum of Australia has started to analyse its web logs. Welcome to 1997.

This afternoon I attended a seminar by Prof Lotfi Zadeh, a particularly famous researcher from the electrical engineering discipline who crossed over into computer science, but now appears to be heading fully into cognitive science (he developed fuzzy logic and possibility theory, amongst other things). His seminar was on precisiated natural language. The idea is that traditional analytical tools like calculus, predicate logic and probability theory are too limited in their ability to express propositions ("Robert is very honest", "it is hot today"). So he is promoting an approach that allows one to do formal computation on propositions by imposing constraints on them: it's a way of formally reasoning as you would with logic ("all men are mortal"), except you can incorporate "loose" terms such as "usually" and "about". In essence, it's a formalisation of the semantics in natural language: somewhat of a Holy Grail. I'm pretty sure he turned off a lot of linguists with his use of mathematics - good, I say.

People: I've met a few times with Graeme. We're currently looking at two issues: 1) The ongoing intellectual property issue; 2) the 1-pager for Bill Nankervis (industry sponsor).

1) The University wants me to sign over all my IP rights to it. This causes me some concern, as it is the University's policy for post-grads to own their IP, except if they're in industry partnerships. My goal is to "open source" my research so that I (and anyone else) can criticise, use, extend and teach it to others. In academia, this is done through publishing papers and theses. As sole and exclusive owner of the IP in perpetuity, the University can do what it likes: sell it, bury it, whatever. This makes me uncomfortable, as Australian universities - sadly - are under enormous funding pressure as the government weans them off public money. I'm in ongoing negotiations about how best to ensure that this state of affairs doesn't impact on my agenda.

2) I'm having to narrow and refine my research question further. I've tried coming at it from the top down, so now I'm trying from the bottom up:

The Satisfaction Wager: I Bet You Want Fries With That. A Game-Theoretic Approach to Anticipating Customer Requirements.


It seems that even when I get away from thinking about business and information technology, and start thinking about customers and information, I still read a lot of authors who talk in business-centric terms, about organisational functions: billing, sales, marketing, product development and so on. I'm trying to think within a paradigm of information about customers, and hence ask "what sort of information does an organisation need about its customers to satisfy them?". Below, I map it out from the point of view of what an organisation does with customer requirements.

  • Fulfilling Customer Requirements (Operations). Eg. customer contact history; service request/order state.
  • Anticipating Customer Requirements (Planning/Sales/Marketing). Eg. customer demand; changes in circumstances; channel preferences.
  • Creating Customer Requirements (Development/PR). Eg. customer opinions; market expectations.

I'm not entirely sure what a customer requirement is: there's a lot of literature around requirements engineering/analysis, but I think this is from a point of view of developing systems for use by an organisation that operates on customers. I'm talking here about something that looks more like a value proposition.

Anyway, grouping it this way might be a way into understanding the value of different types of information. For example, the "fulfilling" side of things is concerned with the "database world", records, tables, lists of details. The stakes for correctness are high: If you install ADSL in the unit next door, you won't get 80% of the money. The "anticipating" side is the "statistical world", where you deal with guesses: if someone gets onto a marketer's campaign target list and turns out to not be interested, it's not the end of the world. Finally, the "creating" side is where we deal with extremely fuzzy concepts of information to do with perceptions and opinions, such as "unfavourable" news articles, endorsements, sponsorship and the whole "branding" and "reputation" thing.

This latter category is definitely out of scope for me: I think I'll focus my efforts on the interaction between the database world and the statistical world. Hence the facetious title above: if you stood in a Maccas and observed the "up-sell" process, how would you assign value to the information involved? Ie what resources (risks) should you expend (accept) to (potentially) acquire what information? Is current order information enough? Does it help if you have historical information? How much difference does having an individual customer's history make, compared to a history of similar customers (segment history)? Does information about the customer's appearance matter? (Eg. compare in-shop with drive-through.) What about information pertaining to future behaviour (compare take-away with eat-in)? Lastly, what about the interaction of information about the McWorker? The time of day? The location? Etc.
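To make that question a little less facetious, here's a minimal expected-value sketch (in Python) of what segment history might be worth per customer at the up-sell moment. Every number - margin, offer cost, acceptance rates - is invented for illustration:

    # Toy value-of-information calculation for "I bet you want fries with that".
    MARGIN = 1.00        # profit per successful up-sell
    OFFER_COST = 0.20    # cost of making the offer (time, goodwill)

    # Without any customer information: one acceptance rate for everyone,
    # so the only policy available is "offer to everyone".
    p_accept = 0.30
    ev_blind = p_accept * MARGIN - OFFER_COST                      # 0.10 per customer

    # With (hypothetical) segment history: half the customers accept 55% of the
    # time, half accept 5%, and we only make the offer to the first half.
    ev_informed = 0.5 * (0.55 * MARGIN - OFFER_COST) + 0.5 * 0.0   # 0.175 per customer

    value_of_segment_history = ev_informed - ev_blind              # ~0.075 per customer
    print(ev_blind, ev_informed, value_of_segment_history)

The same template - compare the best decision you can make with and without the extra fact - would apply to individual history, appearance, drive-through vs eat-in, and so on.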



Once again, it's been a couple of weeks since I've blogged. I'll quickly highlight - in reverse chronological order - the people, seminars and texts before going into a lengthy ramble about ... stuff.

People: I met with my supervisor Graeme this morning, and had a quick discussion about the spectrum of formality surrounding business decision making. See the below ramble. Last Monday I had lunch with Dr. Bob Warfield - former manager from Telstra and now something of a role model or mentor for me - and Dr. Peter Sember, data miner and machine learning colleague from Telstra's Research Labs. We discussed my research, industry news and gossip and collaboration prospects.

The Friday before I re-introduced myself to Dr. Tim van Gelder, a lecturer I had in a cognitive philosophy subject a few years ago. We discussed Tim's projects to do with critical thinking, and his consultancy, and possible synergies with my own research and practice in business intelligence. While there are similarities - the goal is a "good decision" - there are differences: I'm looking at the relationships between inputs to a decision (information and decision rules) and outcomes; he's looking at the process itself and ensuring that groups of people don't make reasoning "mistakes".

Seminars: I've attended two since last blog. The first one was on a cognitive engineering framework, and its application to the operational workflow analysis of the Australian Defence Force's AWACS service. (This is where I bumped into Tim.)

The second one was on the "Soft-Systems Methodology" being used as an extension to an existing methodology ("Whole of Chain") for improving supply chains. SSM looked to me like de-rigoured UML or similar. I'm not sure what value it was contributing to the existing method (I asked what their measures of success were, and they didn't have any), but they had quotes from a couple of workshop participants who thought it was helpful. So I figure that's their criterion: people accept it. They didn't report on whether or not some people thought it unhelpful. They didn't talk about the proportions of people who responded favourably and unfavourably, and then compare these with people who participated in the "reference" scheme (ie without SSM). In short, since I wasn't bowled over by the obvious and self-evident benefits of their scheme, and they gave me no reason to think that it meets other people's needs better than existing schemes, I'm not buying it.

I have to confess I'm still getting my head around IS research.

Book: I read half of, but then lost (dammit!), a text on Decision Support Systems. It was about 10 years old, but had papers going back to the 60s in it! I don't have the title at hand, but Graeme's going to try and score another copy.

I've also discovered a promising text by Stuart MacDonald entitled Information for Innovation. This is the first text I've read that talks about the economics of INFORMATION as opposed to IT. (I read some lecture notes and readings on "information economics", but found it to be an argument for why organisations shouldn't apply traditional cost/benefit analyses to IT capex.) It's quite clear that information is unlike anything else we deal with, is extremely important in determining our quality of life, and yet is surprisingly poorly understood. I would like to make a contribution in this area, and I'm starting to think that Shannon's insights have yet to be fully appreciated.

Ramble: I've been thinking that to drill-down on a topic, I'm going to have to purge areas of interest. For example, some months ago I realised that I was only going to look at "intelligence" (as opposed to "content" - see below). Now, I'm thinking I need to focus on formal decision processes. Allow me to explain ...

There's a spectrum of formality with respect to decision-making. Up one end, the informal end, we have the massively complex strategic decisions which are made by groups of people, using a limitless range of information, with an implied set of priorities and unspoken methods. Example: the board's weekend workshop to decide whether or not to spin-off a business unit.

Up the other - formal - end, we have extremely simple decisions which are made by machines, using a defined set of information, with explicit goals and rules to achieve them. Example: the system won't let you use your phone because you didn't pay your bill.

The idea is that decisions can be delegated to other people - or even machines - if they are characterised sufficiently well for the delegator to be comfortable with the level of discretion the delegatee may have to employ. The question of what becomes formalised, and what doesn't, is probably tied up with many things (eg politics), but I think a key one is "repeatability". At some point, organisations will "hard-code" their decisions as organisational processes. At other times, decision-makers will step in and resume decision-making authority from the organisational process (for example, celebrities don't get treated like you or me).

I'm thinking that for each process, you could imagine a "slider" control that sets how much decision-making is formalised, and how much is informal. This "slider" might have half a dozen states, relating to process functions:
  • Documenting: maintaining the authoritative process map

  • Recording: maintaining the authoritative current state of the process

  • Controlling: driving/executing the process map, changing current state and prompting people where necessary

  • Designing: building, testing and deploying new or modified processes based on experience or simulation

  • Commissioning: determining if new or modified processes are required, and the goals, parameters and resources of the process


The more informal the decision, the more you'd need to look at group-think phenomena, cognitive biases, tacit knowledge and other fuzzy issues best left to the psychologists. I'm thinking that the formal or explicit processes are going to lend themselves best to my style of positivist analysis.

So in that sense, I'm inclined to look at metrics, and their role in decision-making for business processes (customer), service level agreements (supplier), and key performance indicators (staff). Typically, these things are parameterised models, in that the actual specific numbers used are not "built into" them. For example, a sales person can have a KPI as part of their contract, and the structure and administration of this KPI is separate from the target of "5 sales per day": it would be just as valid with "3" or "7" instead. Why, then, "5"? That is obviously a design aspect of the process.

Perhaps if these processes are measurably adding value (eg. the credit-assessment process stops the organisation losing money on bad debtors), then it is reasonable to talk about the value of the metrics (both general thresholds and instance measures) in light of how they affect the performance of the process. If the process is optimised by the selection and use of appropriate metrics, then those metrics have value.
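As a toy illustration of that last point, here's a minimal sketch (in Python) of a parameterised decision rule - a made-up credit-approval threshold - where the value of the metric shows up as the difference in process performance between parameter choices. The scores and payoffs are invented:

    def expected_profit(threshold, applicants):
        """Profit from approving only applicants whose score clears the threshold."""
        profit = 0.0
        for score, margin_if_good, loss_if_bad in applicants:
            if score >= threshold:
                profit += score * margin_if_good - (1 - score) * loss_if_bad
        return profit

    # (score = assessed probability of repaying; payoffs per applicant)
    applicants = [(0.95, 100, 400), (0.80, 100, 400), (0.55, 100, 400), (0.30, 100, 400)]

    for threshold in (0.0, 0.5, 0.7, 0.9):
        print(threshold, expected_profit(threshold, applicants))
    # -300, -50, 75, 75: a well-chosen threshold is worth the difference.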

While I'm not sure about this, I think it's easier than performing a similar analysis on the value of an executive's decisions.



This issue: People, books, seminar and more ramblings.

People: I caught up with xxxxx xxxxxxxx (name deleted on request 25/10/2006) and Joan Valdez, both former colleagues from Telstra days. They are now working in the CRM part of Telstra On Air (data systems). Also, I've been in touch with Dr. Peter Sember, a data miner from Telstra's New Wave Innovation labs. We worked on some projects to do with search query analysis and web mining, so I'm keen to collaborate with him again. Lastly, at the seminar (below), I forced myself upon Brigitte Johnson and Peter Davenport - more Telstra CRM people, but higher up and in Retail. I'm keen to let them know about my research, and look for opportunities there too.

Book: Peter Weill & Marianne Broadbent's book ("Leveraging the New Infrastructure"). This is far more scholarly than Larry P. English's book (below), probably due to the different perspective, purpose and audience. Hell, they even quote Aristotle!

The gist of their approach is to identify IT (and communications) expenditure over the last decade or more for a large number of companies (in excess of two dozen) in different industries. They then compare business outcomes (including competitive positioning) over the same period. By breaking down the spend into eg. firm-wide infrastructure and local business unit, they're able to discern IT strategies (eg. utility, enablement) and see how well they align with business strategy. From this, they draw a set of maxims (in the Aristotelean sense) by which organisations can manage their IT investments.

This seems very sensible. But both the strength and the weakness of this approach is that it treats IT as a capital investment program, as part of an organisation's portfolio of assets. Throughout the book, you get a feeling that they might as well be talking about office space. I've not yet found any discussion about the value of information, separate from the capital items within which it resides. That is, it's very technology focussed. Also, the "money goes in, money comes out later" black-box thing has yet to shed any light (for me) on the fundamental question of WHERE IS THE VALUE? The approach might be useful for benchmarking, and would be useful for people responsible for managing investments in ALL the organisation's activities, in that it puts IT expenditure on an even footing with office space and stationery. But I still have a sense that something is missing ...

So while I'm on a roll, I've got some more comments about Larry P. English's book. This guy is - I'm sure he won't consider this defaming - a quality fanatic. He is relentless in his pursuit of quality, and I think that this is a good thing. I wish that my phone company and bank had the benefit of some quality fanatics and gurus. But his approach/advice leaves me thinking that the hard bit, the interesting bit, isn't being addressed. By that, I mean that it appears he assumes you already have rock-solid, immutable business requirements handed down on a stone tablet. For example (and I'll paraphrase here):


Suppose you have three customers of your data, and two want 99% accuracy and one wants 99.99% - then you must give them 99.99%


After working as a business analyst and supplier of information to decision-makers, I flinched when I read that. Honed by my corporate experience, my first reaction was "are they willing to PAY for 99.99%?" - closely followed by "what's 99.99% WORTH to them?" followed by "what would they do with 99.99% that they WOULDN'T DO with 99%?". I think that this step - what you'd call the information requirements analysis - is what Larry's missing. (Behind every information requirement is an implicit decision model.) And this step is the gist of where this research is headed. Quality without consideration of value isn't helpful in the face of scarce resources, when prioritisation needs to occur.
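One back-of-envelope way to answer the "what's it WORTH?" question is to value the accuracy level through the decisions the data drives. A minimal sketch (in Python) - the error cost and volume are invented, and in practice you'd also ask whether the customer's decision would change at all between 99% and 99.99%:

    COST_PER_BAD_RECORD_ACTED_ON = 50.0   # eg. a truck roll to the wrong address
    RECORDS_USED_PER_YEAR = 100_000

    def expected_error_cost(accuracy):
        return (1 - accuracy) * COST_PER_BAD_RECORD_ACTED_ON * RECORDS_USED_PER_YEAR

    value_of_upgrade = expected_error_cost(0.99) - expected_error_cost(0.9999)
    print(value_of_upgrade)   # 49,500 per year - now compare with the cost of achieving it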

Seminar: This morning I attended an industry learning seminar put on by Priority Learning. It was about CRM Analytics. There were talks by ING and Telstra on their experiences implementing this, and SAS and Acxiom on the how and why. The main message I took away from this was that ROI is not the last word in why you'd want to do this. Both ING and Telstra feel it has been and will be worthwhile, but neither could show ROI (as yet).

There was the usual mish-mash of definitions ("what is intelligence anyway?"), the usual vendor-hype/buyer-scepticism, the "this is not a technology - it's a way of life" talk, the usual biases (SAS flogging tools, Acxiom flogging data) - in short, an industry seminar! I'm glad though, that my supervisor Graeme was able to come as it has given him a better view of where the CRM/Customer Intelligence practice is, and the questions that my research is asking.

Re: the practice side. There seems to be an emerging consensus that there is an Analytic component, and an Operational component, usually entwined in some sort of perpetual embrace, possibly with a "Planning" or "Learning" phase thrown in. This was made explicit through the use of diagrams. This is par for the course, though from what I've seen I have to say I like the Net.Genesys model better.

One aspect I found interesting was the implicit dichotomy inherent in "data" (or information, or intelligence - the terms are used interchangeably): facts about customers, and facts about facts about customers. The former is typically transactions embedded in ER models that reflect business processes. The latter is typically parameters of models that reflect business processes.

Consider the example of a website information system (clickstream log). Here's two "first order" customer facts:

"Greg Hill", "11/3/01 14:02", "homepage.html"
"John Smith", "11/3/01 14:02", "inquiry.html"

Here's two "second order" customer facts:

"50% of customers are Greg Hill"
"100% of times are 11/3/01 14:02"

The former only makes sense (semantically) in the context of an ER model, or a grammar of some type (the logical view). The latter only makes sense in the context of a statistical model (the quantitative view). Certainly the same business process can be modelled with, say, UML or a Markov Model, and then "populated/parameterised" with the measurements. They will have different uses (and value) to different decision-makers - a call centre worker would probably prefer the fact "Greg Hill has the title Mr" - if they were planning on calling me. A market analyst would probably prefer the fact "12% of customers have a title of Dr" - if they were planning an outbound campaign.
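As a trivial sketch (in Python) of how the second-order facts above fall out of the first-order ones by aggregation against a statistical model:

    from collections import Counter

    # First-order facts: transactions, meaningful against the ER model
    clicks = [
        ("Greg Hill",  "11/3/01 14:02", "homepage.html"),
        ("John Smith", "11/3/01 14:02", "inquiry.html"),
    ]

    # Second-order facts: parameters derived by aggregating the first-order ones
    names = Counter(name for name, _, _ in clicks)
    times = Counter(time for _, time, _ in clicks)

    for name, count in names.items():
        print(f"{100 * count / len(clicks):.0f}% of customers are {name}")
    for time, count in times.items():
        print(f"{100 * count / len(clicks):.0f}% of times are {time}")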

But what does information in one domain tell us about information in the other? How does information move back-and-forth between these two quite different views? How does that improve decision-making? How does that generate value?

One last ramble: modelling decisions. To date, I've been thinking that the value of customer information lies in the decisions that it drives, in creating and eliminating options for decision-makers (like above). But nearly all the examples presented today involve implicit modelling of the decision-making of customers: When do they decide to churn? Which product will they be up-sold to? Which channel do they want to be reached through? That is, we're talking about making decisions about other people's decisions: "If I decide to make it free today, then no one will decide to leave tomorrow".

Modelling the mental states of other people involves having a Theory of Mind (a pet interest of mine from cognitive philosophy). Hence, if you take the view that communication is the process of changing our perception of the uncertainty residing in another person's mind ("we don't know anyone else - all we have access to is our own models of them" etc), then marketing really is a dialogue. With yourself. This begs the question: do autistic people - who allegedly lack a Theory of Mind - make particularly bad marketers? Does anyone even know what makes for a particularly bad marketer?

So, putting it together, I'm modelling decision-making about decision-making by looking at facts about facts about customers and how this relates to decision-making about facts about customers.

I need a lie down.



Wow - it's certainly been a while since I've contributed to this blog. First, some texts.

I've been reading Larry P. English's "Information Quality" book. Not very scholarly; it seems to be loaded with good advice, but lacks a grasp on data, information, intelligence, representation etc. Ie the interesting and difficult theoretical stuff. He sets up a sequence ("data" -> "information" -> "knowledge" -> "wisdom") and more or less says that each one is the former one plus context. Whatever that means. Anyway, I'm sure the book would be useful for data custodians or managers of corporate information systems, but I expect it would have limited use to decision-makers and business users, as well as researchers.

Also been going through an introductory book on "Accounting Concepts for Managers". I figure that a lot of accounting concepts and jargon have found their way into information systems, as evidenced by Dr. Moody's paper which proposes historical cost as the basis for the value of information (see below). While I respectfully disagree, it has motivated me to pick up more of these ideas.

Lastly, in the meeting with Graeme this morning we discussed Daniel's paper further, and information economics in general. I got a lead into this area from Mary Sandow-Quirk (Don Lamberton's work) and I'll also chase up Mingers' papers on semiotics. We seem to agree that the value of information lies in its use, and in an organisational context that means decisions. Hence, I've got a book on "Readings in Decision Support System", which will tie in with Graeme's work on Data Tagging for decision outcomes (see below).
We also discussed further our proposed joint paper on SLAs for customer data and analytics, and our plan to put together a "newsletter" document every month or so for Bill Nankervis, our industry sponsor.

I read a Paul Davies book on biogenesis, or the origins of life. A lot of his arguments revolved around complex systems, and the emergence of biological information. The ideas - genes as syntax, emergent properties of semantics, evolution as a (non-teleological) designer - are similar to Douglas Hofstadter's classic "Gödel, Escher, Bach: An Eternal Golden Braid". Davies' book, though, is nowhere near as playful, lively or interesting. Still, it had some good material on the information/entropy front, which got me to thinking about "information systems" in general, and my inability to define one.

Here's a proposed definition set:
A System is an object that can occupy one of an enumerable (possibly infinite) set of states, and manipulations exist that can cause transitions between these states.

A State is a unique configuration of physical properties.

There are two kinds of systems: Artefacts are those systems with an intentional (teleological) design. Emergent systems are everything else.

An Information system is no different from any other system - any system capable of occupying states and transitioning between them can be used to represent or process information. Some properties make certain systems more or less suitable for information systems (ie it's a quality difference).

The key one is that the effort required to maintain a certain state should be the same for all states. This means that - via Bayes' Rule - the best explanation for why a particular state is observed is that it was the same state someone left it in. (This is getting tantalisingly close to entropy/information/semiotics cluster of ideas.)

For example, a census form has two adjacent boxes labelled "Male" and "Female", and a tick in one is just as easy to perform/maintain as a tick in the other. On the other hand, if you were to signify "I'm hungry" by balancing a pencil on its end, and "I'm full" by lying it on its side, you'd go a long time between meals. Hence, box-ticking makes for a higher-quality information system than pencil-balancing. The change in significance per effort expended is maximised. The down-side: errors creep in due to noise. (And they say Shannon has no place in Information Systems theory!)
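Here's a minimal Bayes'-rule sketch (in Python) of the "best explanation" claim, for a two-state system. The flip probabilities are invented, and the asymmetric pencil case is crudely simplified to a single flip rate:

    def posterior_intended(flip_prob, prior_intended=0.5):
        """P(the writer intended the observed state | we observe it)."""
        p_obs_given_intended = 1 - flip_prob
        p_obs_given_other = flip_prob
        num = p_obs_given_intended * prior_intended
        return num / (num + p_obs_given_other * (1 - prior_intended))

    # Ticked census box: both states equally easy to maintain, flips are rare.
    print(posterior_intended(0.001))   # ~0.999 - the mark is strong evidence

    # Balanced pencil: the "hungry" state decays on its own, so an observed
    # "lying on its side" says little about what was intended.
    print(posterior_intended(0.45))    # ~0.55 - barely better than a coin toss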

Another view: a system is a set of possible representations. The greater our uncertainty at "design time", the bigger the set of representations it can maintain. As we apply layer after layer of syntax, we are in effect restricting the set of states available. For example, if we have a blank page that can represent any text, we may restrict it to only accept English text. And then only sentences. And then only propositions. And then only Aristotelean syllogisms. We're eliminating possible representations.

By excluding physical states, we're decreasing the entropy, which according to the Second Law of Thermodynamics ("entropy goes up") means that we're pushing it outside the system (ie it's open). The mathematically-inclined would say that the amount of information introduced is equal to the amount of entropy displaced.
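As a rough, back-of-envelope count of that displaced entropy for an 80-character "page" (the state counts are crude guesses; Shannon's estimate of roughly 1.3 bits per character of English is the only number with any pedigree):

    import math

    ALPHABET = 96                  # printable characters
    LENGTH = 80

    states_blank     = ALPHABET ** LENGTH      # any text at all: ~527 bits
    states_english   = 2 ** (1.3 * LENGTH)     # English-looking text: ~104 bits
    states_syllogism = 1000                    # a small stock of canned syllogisms: ~10 bits

    for label, n in [("blank page", states_blank),
                     ("English text", states_english),
                     ("syllogisms only", states_syllogism)]:
        print(label, round(math.log2(n), 1), "bits")
    # Each drop in bits is entropy displaced out of the system - information
    # introduced by the extra layer of syntax.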

Then, at "use time", the system is "populated" and our uncertainty lies in knowing which state in the subset of valid ones is "correct". At different levels of syntax, we could define equivalences between certain states. One idea I'm kicking around is that this equivalence or isomorphism the key to the problem of semantics (or the emergence of meaning). More reading to do!



I've been thinking about the value people ascribe to information (as per my thesis!) and I'm of the view that, from a value perspective, there are two broad categories:


  • Content

  • Intelligence



Here, content refers to some strange and inexplicable mechanism whereby people appreciate some experience. This can be music, a video, web page, newspaper article, phone conversation, opera etc. In the affluent West, where food and shelter are assured, it is the reason we get up in the morning.

We can measure content in a variety of ways - most obviously duration of experience (time). Generally, the longer the experience, the more we value it. Other measures relate to quality (eg. if the sound is scratchy, the print hard to read or the video requires subtitles then we may value it less). It's hard for me to see a unifying theory for valuing this kind of thing, as it is very subjective. Yet, we do it every day: what's a CD worth, a movie ticket, a phone call etc? In the information age, we are continually valuing content.

What about entropy (mathematical information) measures? I recall a former housemate of mine - a PhD student in communications engineering and applied maths - joked that the entropy in a Bollywood Indian movie approaches zero, since the plot/characters/dialogue/score etc is all completely predictable. Since entropy requires a (parametric) model, what would that be for movies? This is a weird question, and one that I will stay well away from in my research. I suspect that this analysis is properly the domain of a branch of philosophy that deals with aesthetics.

The other category was intelligence. By this, I don't mean it in a directly cognitive sense. I mean it in a sense that, historically, stemmed from the military. So, "I" as in "CIA", not as in "IQ" or "AI". Hence, "Business Intelligence" is about producing actionable information. That is, information upon which you are required to make a decision and act.

For example, if customer numbers don't reach a certain threshold at a particular moment in time, then the product is exited. This decision rule is the model, and the customer count is the parameter. Often, the decision rule is more valuable than the actual metric. This confirms a long-held piece of wisdom: questions are more valuable than answers.

The appropriate measure for intelligence, then, is the extent to which you acted differently. For business intelligence, it is the financial consequences of this action. The idea of entropy (mathematical information) can be applied to measuring the uncertainty in the decision itself. For example, suppose there are two options: take the high road, take the low road. Initially, each is equally likely to be chosen, or acted upon (50%). If some event causes that to shift (20% / 80%), then the change in probabilities can be related to the influence of that event on the decision. That change in decision can have a value ascribed to it using regular decision analysis. It seems reasonable to me to ascribe that value to the change of probabilities resulting from that event: the value of the intelligence.
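Here's that example as a minimal sketch (in Python), with invented payoffs standing in for the "regular decision analysis":

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    payoffs = {"high road": 100.0, "low road": 60.0}   # invented consequences

    before = {"high road": 0.5, "low road": 0.5}
    after  = {"high road": 0.8, "low road": 0.2}       # propensities after the event

    def expected_value(probs):
        return sum(probs[option] * payoffs[option] for option in probs)

    print(entropy(before.values()) - entropy(after.values()))   # ~0.28 bits of decision uncertainty resolved
    print(expected_value(after) - expected_value(before))       # +12.0 - one value to ascribe to the intelligence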

I plan to look at options pricing theory (including "real options" analysis). This is a school of thought that links concepts of value, uncertainty (risk) and time with decisions, and is typically applied to investment decisions, specifically, the futures and derivatives markets. It can be applied to a much wider range of decision-making too.

In setting up a "content/intelligence dichotomy" it's interesting to consider the boundary. For example, "news" feels like intelligence, but is it? I am happy to receive headlines on my mobile phone via GSM, but I don't actually do anything differently: news of a celebrity's passing doesn't prompt me to do anything I wouldn't have done anyway. Yet I value it anyway, so it's content. What about politics? Voting is compulsory (well, turning up is anyway). What about weather reports? For other cities? Things to keep in mind as I stumble along ...



Last week, I reviewed a paper by a former PhD candidate of Graeme's - Daniel Moody (with Peter Walsh). They work at Simsion Bowles & Associates. The paper was presented at ECIS '99 and is called "Measuring the Value of Information: An Asset Valuation Approach". As the title suggests, it is very much in line with my thesis topic. The thrust of the paper is that organisations should treat information as a type of asset, and it should be accorded the same accounting principles as regular assets. The paper goes onto highlight the ways that information is different from regular assets, and hence how it should be treated differently.

The paper suggests seven "Laws" about information-as-an-asset, and proposes that (historical) cost accounting should be the basis for determining the value. This was done without reference to information economics. While I disagree with both the approach and the conclusions/recommendations in this paper, I am given great heart to see that there is a dearth of research in this area. I'm confident that this is a thesis-sized problem!

I am also pleased to finally see an IS researcher cite Shannon's fundamental analysis of "information" - even if I disagree with the conclusion. I'm puzzled, though, that the whole Sveiby/KM thing wasn't mentioned at all. (There was a passing mention of "information registers" but that was it.)

In other news, Graeme and I met with our industry sponsor - Bill Nankervis (Telstra/Retail/Technology Services/..?.../Information Management). I met with Bill a couple of times before while I was still a Telstra employee, but this was our first meeting as researcher/sponsor. We discussed some of Telstra's current goals and issues with regard to information value and data quality, and I'm confident that there is a strong alignment between my work experience, my thesis and Bill's objectives and approach.



Had the regular weekly supervision session with Graeme. Today we discussed the relationship between theories and frameworks, especially in light of Weber's 1st chapter and Dr. Hitchman's seminar (below). Mostly we looked at Graeme's paper on "The Impact of Data Quality Tagging on Decision Outcomes". The main feedback I had was the idea that people will use pre-existing knowledge about the decision task to infer data quality when they aren't presented with any explicitly. In the terms of Graeme's semiotic framework, the social-level "leaks" into the semantic-level. One approach - potentially underway - to control this is to use completely contrived decision tasks totally unfamiliar to the subjects. Also, I'm curious about how the tagging (quality metadata) of accuracy relates to "traditional" measures of uncertainty such as variance and entropy. Lastly, it seems that this research is heading towards exploring the relationships between data quality and decision quality. Ie consensus, time taken, confidence etc seem to be attributes of a decision, and teasing out the significance of data quality constructs on these outcomes would be a whole field of research in itself.

The other topic we discussed was the idea for a joint paper on Service Level Agreements for outsourced customer information. This would be an application of Graeme's framework to the question of how to construct, negotiate and implement an SLA for the provision of customer information services. I think this is quite topical, as while CRM is taking off, organisations are shying away from the underlying data warehousing infrastructure. The paper would involve researching ideas of information-as-a-service and service quality theories, and my own experiences as a practitioner. The motivation would be to show that data quality issues are a business problem, and can't be contained solely within the IT department. While it's not the main thrust of my thesis, it would be a nice introduction to the "trade" aspects of the research process (ethics, reviews, peer assessment, publication etc).

Lastly, there was a stack of actions for Graeme, involving chasing up information from various people (industry co-sponsor and former PhD student). I've borrowed two books: "Leveraging the New Infrastructure" (Weill and Broadbent) and "Quality Information and Knowledge" (Huang, Lee and Wang).



This morning we had a seminar from Dr. Stephen Hitchman on "Data Muddelling". In essence, he was saying that the IS academy has lost its way and is failing practitioners in this subject area. That is, the program of seeking a sound basis for data modelling in various philosophies is a waste of tax-payers' resources and that, if anything, we should be looking at the work of Edward De Bono.

I'm not sure that I accept that my role as an IS researcher is to ensure that everything I do is of immediate relevance to practitioners. Academic research is risky, and involves longer time scales. Low-risk, quick-delivery research can be directly funded by the beneficiaries, and there are a number of organisations who will take this on. This is part of the "division of labour" of IS research.

That said, Stephen's provocative stance has failed to dissuade me from finishing the introduction to Ron Weber's monograph on "The Ontological Foundations of Information Systems".



Last Friday, there was a seminar on "decision intelligence". I was keen to go, but unexpected family business whisked me away. After reading the abstract (below), I think that while it may have been of general interest, it probably wasn't related to my research domain. It would, however, be of extreme interest and relevance to people working in large, complex and dynamic organisations, who are required to lobby somewhat-fickle decision-makers.


Predicting people's policymaking styles


Dr Ray Wyatt


School of Anthropology, Geography and Environmental Studies, University of Melbourne


ABSTRACT:

Rather than "decision support", the focus is on "decision intelligence" for policymaking. This involves anticipating what policies different kinds of people are likely to favor. Such anticipation enables us to guess how much any proposed policy is likely to be accepted within the community - a consideration that can be just as vital for its ultimate success as any amount of logical, empirical or analytical "support". Therefore, this presentation begins by looking at the planning literature and at the decision-making literature for clues as to how to anticipate people's policy choices. But on finding very few, a radically different approach is outlined. It uses the speaker's own self-improving, advice-giving software which collects enough knowledge, about its past users' decision-making styles, to identify what policymaking criteria different sorts of people tend to emphasize. Such people-specific emphases will be outlined. They should help all professionals, everywhere, to foreshadow the community acceptance of any policy within any problem domain.



This is the website of one Karl-Erik Sveiby: http://www.sveiby.com.au/. He appears to be a leading researcher and practitioner - even pioneer - of the field of knowledge management. He has some interesting ideas on valuing non-tangible assets, and some very sensible things to say about organisational performance metrics. While his Intangible Asset Monitor is similar to ideas encapsulated in the Balanced Score Card methodology, he is at pains to point out the differences.

I wish I'd caught his seminar in my department last semester, but, the ".au" suggests he might be back.


Uh oh - a week's gone by without any blog postings. Hardly the point. Okay, a quick review then. I've been having regular weekly meetings with my supervisor, Graeme Shanks. So far, the discussion is around two topics: 1) the nature of research in the IS discipline and 2) Graeme's research in data quality. Of the former, I've been reading papers on IS research approaches (experiments, case studies, action research, conceptual studies etc) and stages (theory building, theory testing, and the difference between scholarship and research).

Of the latter, I've been getting across Graeme's approach, based on semiotic theory - the use of signs and symbols to convey knowledge. There may be collaboration opportunities to apply this framework to some of my professional work in defining and negotiating Service Level Agreements with Application Service Providers, who primarily provide data and reports. While this isn't the thrust of my research, it might prove to be an interesting and useful (ie publishable) area.

The main gist, though, is on the value of information. This is no doubt related to the quality of data - probably through the notion of quality as "fitness for purpose". To that end, this week I'm looking into a text on the "Ontological Foundations of Information Systems" (Weber) and reviewing another of Graeme's papers on the role of quality in decision outcomes. I will also begin in earnest a look into information economics. I've attended some lectures on Game Theory, which, along with Decision Theory, will probably be a formalist way in.

I'm mindful of the relevance vs rigour aspects of this, though, as I expect that models of how entities make decisions bear little resemblance to what people actually do in organisations. I think, generally, the benefits of a model lie in what is left out as much as anything.



This is my first post to a blog. I plan to post links and commentary to this blog as a journal of my research. I guess the audience is me (in the future) and friends, colleagues and well-wishers who have a passing interest in this topic, a web connection, and too much spare time. Hopefully, this will lend some legitimacy to my web browsing.

So first off, here's my homepage. I'm a PhD candidate in the Information Systems Department, working on an industry-sponsored research project with Telstra Corporation on, well, something to do with the value of customer intelligence.

